Oct 29 00:40:00.176689 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 28 22:31:02 -00 2025 Oct 29 00:40:00.176729 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae Oct 29 00:40:00.176743 kernel: BIOS-provided physical RAM map: Oct 29 00:40:00.176750 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 29 00:40:00.176757 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 29 00:40:00.176764 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 29 00:40:00.176773 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Oct 29 00:40:00.176785 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Oct 29 00:40:00.176792 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 29 00:40:00.176802 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 29 00:40:00.176809 kernel: NX (Execute Disable) protection: active Oct 29 00:40:00.176817 kernel: APIC: Static calls initialized Oct 29 00:40:00.176824 kernel: SMBIOS 2.8 present. Oct 29 00:40:00.176831 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Oct 29 00:40:00.176841 kernel: DMI: Memory slots populated: 1/1 Oct 29 00:40:00.176876 kernel: Hypervisor detected: KVM Oct 29 00:40:00.176887 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Oct 29 00:40:00.176895 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 29 00:40:00.176904 kernel: kvm-clock: using sched offset of 3747485695 cycles Oct 29 00:40:00.176913 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 29 00:40:00.176922 kernel: tsc: Detected 2494.140 MHz processor Oct 29 00:40:00.176931 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 29 00:40:00.176940 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 29 00:40:00.176952 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Oct 29 00:40:00.176961 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 29 00:40:00.176969 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 29 00:40:00.176978 kernel: ACPI: Early table checksum verification disabled Oct 29 00:40:00.176987 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Oct 29 00:40:00.176995 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:40:00.177004 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:40:00.177015 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:40:00.177024 kernel: ACPI: FACS 0x000000007FFE0000 000040 Oct 29 00:40:00.177032 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:40:00.177041 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:40:00.177050 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 
00:40:00.177058 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:40:00.177067 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Oct 29 00:40:00.177078 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Oct 29 00:40:00.177087 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Oct 29 00:40:00.177096 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Oct 29 00:40:00.177108 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Oct 29 00:40:00.177117 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Oct 29 00:40:00.177129 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Oct 29 00:40:00.177137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 29 00:40:00.177146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 29 00:40:00.177156 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Oct 29 00:40:00.177165 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Oct 29 00:40:00.177173 kernel: Zone ranges: Oct 29 00:40:00.177185 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 29 00:40:00.177194 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Oct 29 00:40:00.177203 kernel: Normal empty Oct 29 00:40:00.177212 kernel: Device empty Oct 29 00:40:00.177221 kernel: Movable zone start for each node Oct 29 00:40:00.177230 kernel: Early memory node ranges Oct 29 00:40:00.177239 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 29 00:40:00.177247 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Oct 29 00:40:00.177259 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Oct 29 00:40:00.177268 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 29 00:40:00.177277 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 29 00:40:00.177286 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Oct 29 00:40:00.177295 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 29 00:40:00.177307 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 29 00:40:00.177319 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 29 00:40:00.177345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 29 00:40:00.177357 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 29 00:40:00.177370 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 29 00:40:00.177386 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 29 00:40:00.177399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 29 00:40:00.177413 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 29 00:40:00.177423 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 29 00:40:00.177432 kernel: TSC deadline timer available Oct 29 00:40:00.177445 kernel: CPU topo: Max. logical packages: 1 Oct 29 00:40:00.177454 kernel: CPU topo: Max. logical dies: 1 Oct 29 00:40:00.177462 kernel: CPU topo: Max. dies per package: 1 Oct 29 00:40:00.177471 kernel: CPU topo: Max. threads per core: 1 Oct 29 00:40:00.177480 kernel: CPU topo: Num. cores per package: 2 Oct 29 00:40:00.177489 kernel: CPU topo: Num. 
threads per package: 2 Oct 29 00:40:00.177498 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Oct 29 00:40:00.177509 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 29 00:40:00.177518 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Oct 29 00:40:00.177527 kernel: Booting paravirtualized kernel on KVM Oct 29 00:40:00.177536 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 29 00:40:00.177545 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 29 00:40:00.177554 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Oct 29 00:40:00.177563 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Oct 29 00:40:00.177575 kernel: pcpu-alloc: [0] 0 1 Oct 29 00:40:00.177584 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 29 00:40:00.177594 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae Oct 29 00:40:00.177604 kernel: random: crng init done Oct 29 00:40:00.177613 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 29 00:40:00.177622 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 29 00:40:00.177631 kernel: Fallback order for Node 0: 0 Oct 29 00:40:00.177643 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Oct 29 00:40:00.177652 kernel: Policy zone: DMA32 Oct 29 00:40:00.177660 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 29 00:40:00.177670 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 29 00:40:00.177678 kernel: Kernel/User page tables isolation: enabled Oct 29 00:40:00.177688 kernel: ftrace: allocating 40092 entries in 157 pages Oct 29 00:40:00.177697 kernel: ftrace: allocated 157 pages with 5 groups Oct 29 00:40:00.177709 kernel: Dynamic Preempt: voluntary Oct 29 00:40:00.177718 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 29 00:40:00.177728 kernel: rcu: RCU event tracing is enabled. Oct 29 00:40:00.177737 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 29 00:40:00.177746 kernel: Trampoline variant of Tasks RCU enabled. Oct 29 00:40:00.177755 kernel: Rude variant of Tasks RCU enabled. Oct 29 00:40:00.177764 kernel: Tracing variant of Tasks RCU enabled. Oct 29 00:40:00.177773 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 29 00:40:00.177785 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 29 00:40:00.177794 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 29 00:40:00.177806 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 29 00:40:00.177815 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 29 00:40:00.177824 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 29 00:40:00.177834 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 29 00:40:00.177842 kernel: Console: colour VGA+ 80x25 Oct 29 00:40:00.177872 kernel: printk: legacy console [tty0] enabled Oct 29 00:40:00.177889 kernel: printk: legacy console [ttyS0] enabled Oct 29 00:40:00.177901 kernel: ACPI: Core revision 20240827 Oct 29 00:40:00.177916 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 29 00:40:00.177942 kernel: APIC: Switch to symmetric I/O mode setup Oct 29 00:40:00.177959 kernel: x2apic enabled Oct 29 00:40:00.177973 kernel: APIC: Switched APIC routing to: physical x2apic Oct 29 00:40:00.177988 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 29 00:40:00.178002 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Oct 29 00:40:00.178024 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Oct 29 00:40:00.178039 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 29 00:40:00.178053 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 29 00:40:00.178068 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 29 00:40:00.178084 kernel: Spectre V2 : Mitigation: Retpolines Oct 29 00:40:00.178094 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 29 00:40:00.178104 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Oct 29 00:40:00.178114 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 29 00:40:00.178124 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 29 00:40:00.178133 kernel: MDS: Mitigation: Clear CPU buffers Oct 29 00:40:00.178143 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 29 00:40:00.178156 kernel: active return thunk: its_return_thunk Oct 29 00:40:00.178165 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 29 00:40:00.178175 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 29 00:40:00.178185 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 29 00:40:00.178194 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 29 00:40:00.178204 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 29 00:40:00.178214 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 29 00:40:00.178226 kernel: Freeing SMP alternatives memory: 32K Oct 29 00:40:00.178235 kernel: pid_max: default: 32768 minimum: 301 Oct 29 00:40:00.178245 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 29 00:40:00.178254 kernel: landlock: Up and running. Oct 29 00:40:00.178263 kernel: SELinux: Initializing. Oct 29 00:40:00.178273 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 29 00:40:00.178283 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 29 00:40:00.178295 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Oct 29 00:40:00.178305 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Oct 29 00:40:00.178314 kernel: signal: max sigframe size: 1776 Oct 29 00:40:00.178324 kernel: rcu: Hierarchical SRCU implementation. Oct 29 00:40:00.178334 kernel: rcu: Max phase no-delay instances is 400. 
Oct 29 00:40:00.178343 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 29 00:40:00.178353 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 29 00:40:00.178365 kernel: smp: Bringing up secondary CPUs ... Oct 29 00:40:00.178378 kernel: smpboot: x86: Booting SMP configuration: Oct 29 00:40:00.178388 kernel: .... node #0, CPUs: #1 Oct 29 00:40:00.178397 kernel: smp: Brought up 1 node, 2 CPUs Oct 29 00:40:00.178407 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Oct 29 00:40:00.178417 kernel: Memory: 1989436K/2096612K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 102612K reserved, 0K cma-reserved) Oct 29 00:40:00.178427 kernel: devtmpfs: initialized Oct 29 00:40:00.178439 kernel: x86/mm: Memory block size: 128MB Oct 29 00:40:00.178449 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 29 00:40:00.178459 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 29 00:40:00.178468 kernel: pinctrl core: initialized pinctrl subsystem Oct 29 00:40:00.178478 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 29 00:40:00.178488 kernel: audit: initializing netlink subsys (disabled) Oct 29 00:40:00.178498 kernel: audit: type=2000 audit(1761698397.894:1): state=initialized audit_enabled=0 res=1 Oct 29 00:40:00.178510 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 29 00:40:00.178520 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 29 00:40:00.178529 kernel: cpuidle: using governor menu Oct 29 00:40:00.178539 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 29 00:40:00.178549 kernel: dca service started, version 1.12.1 Oct 29 00:40:00.178558 kernel: PCI: Using configuration type 1 for base access Oct 29 00:40:00.178568 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 29 00:40:00.178578 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 29 00:40:00.178590 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 29 00:40:00.178600 kernel: ACPI: Added _OSI(Module Device) Oct 29 00:40:00.178610 kernel: ACPI: Added _OSI(Processor Device) Oct 29 00:40:00.178619 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 29 00:40:00.178629 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 29 00:40:00.178638 kernel: ACPI: Interpreter enabled Oct 29 00:40:00.178648 kernel: ACPI: PM: (supports S0 S5) Oct 29 00:40:00.178660 kernel: ACPI: Using IOAPIC for interrupt routing Oct 29 00:40:00.178670 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 29 00:40:00.178679 kernel: PCI: Using E820 reservations for host bridge windows Oct 29 00:40:00.178689 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 29 00:40:00.178698 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 29 00:40:00.178996 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 29 00:40:00.179162 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Oct 29 00:40:00.179310 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Oct 29 00:40:00.179324 kernel: acpiphp: Slot [3] registered Oct 29 00:40:00.179334 kernel: acpiphp: Slot [4] registered Oct 29 00:40:00.179344 kernel: acpiphp: Slot [5] registered Oct 29 00:40:00.179354 kernel: acpiphp: Slot [6] registered Oct 29 00:40:00.179907 kernel: acpiphp: Slot [7] registered Oct 29 00:40:00.179929 kernel: acpiphp: Slot [8] registered Oct 29 00:40:00.179939 kernel: acpiphp: Slot [9] registered Oct 29 00:40:00.179949 kernel: acpiphp: Slot [10] registered Oct 29 00:40:00.179958 kernel: acpiphp: Slot [11] registered Oct 29 00:40:00.179968 kernel: acpiphp: Slot [12] registered Oct 29 00:40:00.179978 kernel: acpiphp: Slot [13] registered Oct 29 00:40:00.179988 kernel: acpiphp: Slot [14] registered Oct 29 00:40:00.180000 kernel: acpiphp: Slot [15] registered Oct 29 00:40:00.180010 kernel: acpiphp: Slot [16] registered Oct 29 00:40:00.180020 kernel: acpiphp: Slot [17] registered Oct 29 00:40:00.180029 kernel: acpiphp: Slot [18] registered Oct 29 00:40:00.180039 kernel: acpiphp: Slot [19] registered Oct 29 00:40:00.180049 kernel: acpiphp: Slot [20] registered Oct 29 00:40:00.180059 kernel: acpiphp: Slot [21] registered Oct 29 00:40:00.180071 kernel: acpiphp: Slot [22] registered Oct 29 00:40:00.180081 kernel: acpiphp: Slot [23] registered Oct 29 00:40:00.180090 kernel: acpiphp: Slot [24] registered Oct 29 00:40:00.180100 kernel: acpiphp: Slot [25] registered Oct 29 00:40:00.180110 kernel: acpiphp: Slot [26] registered Oct 29 00:40:00.180120 kernel: acpiphp: Slot [27] registered Oct 29 00:40:00.180129 kernel: acpiphp: Slot [28] registered Oct 29 00:40:00.180139 kernel: acpiphp: Slot [29] registered Oct 29 00:40:00.180151 kernel: acpiphp: Slot [30] registered Oct 29 00:40:00.180161 kernel: acpiphp: Slot [31] registered Oct 29 00:40:00.180171 kernel: PCI host bridge to bus 0000:00 Oct 29 00:40:00.180364 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 29 00:40:00.180492 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 29 00:40:00.180614 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 29 00:40:00.180736 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Oct 29 00:40:00.180868 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Oct 29 00:40:00.180988 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 29 00:40:00.181197 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Oct 29 00:40:00.181343 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Oct 29 00:40:00.181494 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Oct 29 00:40:00.181702 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Oct 29 00:40:00.182426 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Oct 29 00:40:00.182591 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Oct 29 00:40:00.182725 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Oct 29 00:40:00.182892 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Oct 29 00:40:00.183075 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Oct 29 00:40:00.183211 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Oct 29 00:40:00.183349 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Oct 29 00:40:00.183481 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 29 00:40:00.183610 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 29 00:40:00.183794 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Oct 29 00:40:00.183955 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Oct 29 00:40:00.184090 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Oct 29 00:40:00.184221 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Oct 29 00:40:00.184356 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Oct 29 00:40:00.184486 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 29 00:40:00.184694 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 29 00:40:00.184838 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Oct 29 00:40:00.184987 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Oct 29 00:40:00.185117 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Oct 29 00:40:00.185279 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 29 00:40:00.185450 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Oct 29 00:40:00.185585 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Oct 29 00:40:00.185718 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Oct 29 00:40:00.186188 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Oct 29 00:40:00.189084 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Oct 29 00:40:00.189243 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Oct 29 00:40:00.189385 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Oct 29 00:40:00.189529 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 29 00:40:00.189663 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Oct 29 00:40:00.189801 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Oct 29 00:40:00.189959 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Oct 29 00:40:00.190100 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 29 00:40:00.190241 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Oct 29 00:40:00.190396 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Oct 29 00:40:00.190591 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Oct 29 00:40:00.190887 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Oct 29 00:40:00.191065 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Oct 29 00:40:00.191210 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Oct 29 00:40:00.191223 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 29 00:40:00.191234 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 29 00:40:00.191245 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 29 00:40:00.191255 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 29 00:40:00.191264 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 29 00:40:00.191274 kernel: iommu: Default domain type: Translated Oct 29 00:40:00.191288 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 29 00:40:00.191304 kernel: PCI: Using ACPI for IRQ routing Oct 29 00:40:00.191319 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 29 00:40:00.191332 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 29 00:40:00.191347 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Oct 29 00:40:00.191528 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 29 00:40:00.191695 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 29 00:40:00.191867 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 29 00:40:00.191881 kernel: vgaarb: loaded Oct 29 00:40:00.191891 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 29 00:40:00.191901 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 29 00:40:00.191911 kernel: clocksource: Switched to clocksource kvm-clock Oct 29 00:40:00.191921 kernel: VFS: Disk quotas dquot_6.6.0 Oct 29 00:40:00.191931 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 29 00:40:00.191946 kernel: pnp: PnP ACPI init Oct 29 00:40:00.191956 kernel: pnp: PnP ACPI: found 4 devices Oct 29 00:40:00.191967 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 29 00:40:00.191979 kernel: NET: Registered PF_INET protocol family Oct 29 00:40:00.191996 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 29 00:40:00.192011 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 29 00:40:00.192027 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 29 00:40:00.192045 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 29 00:40:00.192059 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 29 00:40:00.192073 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 29 00:40:00.192087 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 29 00:40:00.192100 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 29 00:40:00.192114 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 29 00:40:00.192129 kernel: NET: Registered PF_XDP protocol family Oct 29 00:40:00.192353 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 29 00:40:00.193466 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Oct 29 00:40:00.196003 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 29 00:40:00.196155 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 29 00:40:00.196280 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 29 00:40:00.196442 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 29 00:40:00.196687 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 29 00:40:00.196711 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 29 00:40:00.198957 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 39884 usecs Oct 29 00:40:00.199016 kernel: PCI: CLS 0 bytes, default 64 Oct 29 00:40:00.199032 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 29 00:40:00.199043 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Oct 29 00:40:00.199053 kernel: Initialise system trusted keyrings Oct 29 00:40:00.199073 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 29 00:40:00.199083 kernel: Key type asymmetric registered Oct 29 00:40:00.199092 kernel: Asymmetric key parser 'x509' registered Oct 29 00:40:00.199102 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 29 00:40:00.199112 kernel: io scheduler mq-deadline registered Oct 29 00:40:00.199122 kernel: io scheduler kyber registered Oct 29 00:40:00.199132 kernel: io scheduler bfq registered Oct 29 00:40:00.199144 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 29 00:40:00.199155 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Oct 29 00:40:00.199165 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 29 00:40:00.199175 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 29 00:40:00.199185 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 29 00:40:00.199194 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 29 00:40:00.199204 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 29 00:40:00.199217 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 29 00:40:00.199226 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 29 00:40:00.199414 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 29 00:40:00.199430 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 29 00:40:00.199574 kernel: rtc_cmos 00:03: registered as rtc0 Oct 29 00:40:00.199751 kernel: rtc_cmos 00:03: setting system clock to 2025-10-29T00:39:58 UTC (1761698398) Oct 29 00:40:00.199942 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Oct 29 00:40:00.199957 kernel: intel_pstate: CPU model not supported Oct 29 00:40:00.199967 kernel: NET: Registered PF_INET6 protocol family Oct 29 00:40:00.199978 kernel: Segment Routing with IPv6 Oct 29 00:40:00.199988 kernel: In-situ OAM (IOAM) with IPv6 Oct 29 00:40:00.199998 kernel: NET: Registered PF_PACKET protocol family Oct 29 00:40:00.200008 kernel: Key type dns_resolver registered Oct 29 00:40:00.200019 kernel: IPI shorthand broadcast: enabled Oct 29 00:40:00.200033 kernel: sched_clock: Marking stable (1312003499, 145034559)->(1479260844, -22222786) Oct 29 00:40:00.200043 kernel: registered taskstats version 1 Oct 29 00:40:00.200053 kernel: Loading compiled-in X.509 certificates Oct 29 00:40:00.200063 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4eb70affb0e364bb9bcbea2a9416e57c31aed070' Oct 
29 00:40:00.200073 kernel: Demotion targets for Node 0: null Oct 29 00:40:00.200083 kernel: Key type .fscrypt registered Oct 29 00:40:00.200093 kernel: Key type fscrypt-provisioning registered Oct 29 00:40:00.200120 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 29 00:40:00.200135 kernel: ima: Allocated hash algorithm: sha1 Oct 29 00:40:00.200146 kernel: ima: No architecture policies found Oct 29 00:40:00.200156 kernel: clk: Disabling unused clocks Oct 29 00:40:00.200166 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 29 00:40:00.200177 kernel: Write protecting the kernel read-only data: 40960k Oct 29 00:40:00.200187 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 29 00:40:00.200200 kernel: Run /init as init process Oct 29 00:40:00.200213 kernel: with arguments: Oct 29 00:40:00.200223 kernel: /init Oct 29 00:40:00.200233 kernel: with environment: Oct 29 00:40:00.200243 kernel: HOME=/ Oct 29 00:40:00.200253 kernel: TERM=linux Oct 29 00:40:00.200263 kernel: SCSI subsystem initialized Oct 29 00:40:00.200276 kernel: libata version 3.00 loaded. Oct 29 00:40:00.200425 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 29 00:40:00.200608 kernel: scsi host0: ata_piix Oct 29 00:40:00.200758 kernel: scsi host1: ata_piix Oct 29 00:40:00.200777 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Oct 29 00:40:00.200787 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Oct 29 00:40:00.202772 kernel: ACPI: bus type USB registered Oct 29 00:40:00.202787 kernel: usbcore: registered new interface driver usbfs Oct 29 00:40:00.202798 kernel: usbcore: registered new interface driver hub Oct 29 00:40:00.202808 kernel: usbcore: registered new device driver usb Oct 29 00:40:00.203151 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Oct 29 00:40:00.203296 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Oct 29 00:40:00.203477 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Oct 29 00:40:00.203624 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Oct 29 00:40:00.203808 kernel: hub 1-0:1.0: USB hub found Oct 29 00:40:00.205127 kernel: hub 1-0:1.0: 2 ports detected Oct 29 00:40:00.205310 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Oct 29 00:40:00.205460 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 29 00:40:00.205475 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 29 00:40:00.205486 kernel: GPT:16515071 != 125829119 Oct 29 00:40:00.205497 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 29 00:40:00.205507 kernel: GPT:16515071 != 125829119 Oct 29 00:40:00.205520 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 29 00:40:00.205531 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 00:40:00.205676 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Oct 29 00:40:00.205810 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Oct 29 00:40:00.208937 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Oct 29 00:40:00.209133 kernel: scsi host2: Virtio SCSI HBA Oct 29 00:40:00.209159 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:40:00.209170 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 29 00:40:00.209181 kernel: device-mapper: uevent: version 1.0.3 Oct 29 00:40:00.209192 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 29 00:40:00.209204 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 29 00:40:00.209214 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:40:00.209224 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:40:00.209236 kernel: raid6: avx2x4 gen() 17876 MB/s Oct 29 00:40:00.209247 kernel: raid6: avx2x2 gen() 19454 MB/s Oct 29 00:40:00.209258 kernel: raid6: avx2x1 gen() 17169 MB/s Oct 29 00:40:00.209269 kernel: raid6: using algorithm avx2x2 gen() 19454 MB/s Oct 29 00:40:00.209279 kernel: raid6: .... xor() 15610 MB/s, rmw enabled Oct 29 00:40:00.209290 kernel: raid6: using avx2x2 recovery algorithm Oct 29 00:40:00.209300 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:40:00.209312 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:40:00.209322 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:40:00.209333 kernel: xor: automatically using best checksumming function avx Oct 29 00:40:00.209343 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:40:00.209353 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 29 00:40:00.209364 kernel: BTRFS: device fsid c0171910-1eb4-4fd7-b94c-9d6b11be282f devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (157) Oct 29 00:40:00.209374 kernel: BTRFS info (device dm-0): first mount of filesystem c0171910-1eb4-4fd7-b94c-9d6b11be282f Oct 29 00:40:00.209385 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 29 00:40:00.209398 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 29 00:40:00.209409 kernel: BTRFS info (device dm-0): enabling free space tree Oct 29 00:40:00.209420 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:40:00.209430 kernel: loop: module loaded Oct 29 00:40:00.209441 kernel: loop0: detected capacity change from 0 to 100120 Oct 29 00:40:00.209451 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 00:40:00.209463 systemd[1]: Successfully made /usr/ read-only. Oct 29 00:40:00.209481 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 00:40:00.209492 systemd[1]: Detected virtualization kvm. Oct 29 00:40:00.209503 systemd[1]: Detected architecture x86-64. Oct 29 00:40:00.209513 systemd[1]: Running in initrd. Oct 29 00:40:00.209523 systemd[1]: No hostname configured, using default hostname. Oct 29 00:40:00.209538 systemd[1]: Hostname set to . Oct 29 00:40:00.209548 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 29 00:40:00.209559 systemd[1]: Queued start job for default target initrd.target. Oct 29 00:40:00.209570 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 00:40:00.209581 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 00:40:00.209592 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 00:40:00.209604 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 29 00:40:00.209617 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 00:40:00.209629 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 29 00:40:00.209640 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 29 00:40:00.209651 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 00:40:00.209662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 00:40:00.209676 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 29 00:40:00.209687 systemd[1]: Reached target paths.target - Path Units. Oct 29 00:40:00.209698 systemd[1]: Reached target slices.target - Slice Units. Oct 29 00:40:00.209708 systemd[1]: Reached target swap.target - Swaps. Oct 29 00:40:00.209719 systemd[1]: Reached target timers.target - Timer Units. Oct 29 00:40:00.209730 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 00:40:00.209741 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 00:40:00.209754 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 29 00:40:00.209765 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 29 00:40:00.209776 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 00:40:00.209787 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 00:40:00.209799 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 00:40:00.209810 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 00:40:00.209821 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 29 00:40:00.209834 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 29 00:40:00.209845 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 00:40:00.209882 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 29 00:40:00.209894 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 29 00:40:00.209905 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 00:40:00.209916 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 00:40:00.209927 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 00:40:00.209940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 00:40:00.209952 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 29 00:40:00.209963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 00:40:00.209977 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 00:40:00.209988 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 29 00:40:00.210037 systemd-journald[293]: Collecting audit messages is disabled. Oct 29 00:40:00.210065 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 29 00:40:00.210078 kernel: Bridge firewalling registered Oct 29 00:40:00.210095 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 00:40:00.210113 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 00:40:00.210131 systemd-journald[293]: Journal started Oct 29 00:40:00.210160 systemd-journald[293]: Runtime Journal (/run/log/journal/2680a907996e4f068645bf51f075bb64) is 4.9M, max 39.2M, 34.3M free. Oct 29 00:40:00.198703 systemd-modules-load[294]: Inserted module 'br_netfilter' Oct 29 00:40:00.213935 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 00:40:00.214804 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 29 00:40:00.223887 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 00:40:00.281042 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 00:40:00.283501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:40:00.285279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 00:40:00.289226 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 29 00:40:00.296239 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 00:40:00.300731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 00:40:00.312030 systemd-tmpfiles[314]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 29 00:40:00.324086 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 00:40:00.331108 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 00:40:00.334084 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 29 00:40:00.370299 systemd-resolved[319]: Positive Trust Anchors: Oct 29 00:40:00.371159 systemd-resolved[319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 00:40:00.371167 systemd-resolved[319]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 00:40:00.384833 dracut-cmdline[334]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae Oct 29 00:40:00.371205 systemd-resolved[319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 00:40:00.424440 systemd-resolved[319]: Defaulting to hostname 'linux'. 
Oct 29 00:40:00.426808 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 00:40:00.428213 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 00:40:00.569902 kernel: Loading iSCSI transport class v2.0-870. Oct 29 00:40:00.594895 kernel: iscsi: registered transport (tcp) Oct 29 00:40:00.629919 kernel: iscsi: registered transport (qla4xxx) Oct 29 00:40:00.630074 kernel: QLogic iSCSI HBA Driver Oct 29 00:40:00.683149 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 00:40:00.726010 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 00:40:00.730285 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 00:40:00.825764 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 29 00:40:00.829894 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 29 00:40:00.833153 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 29 00:40:00.896721 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 29 00:40:00.901597 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 00:40:00.951943 systemd-udevd[573]: Using default interface naming scheme 'v257'. Oct 29 00:40:00.973067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 00:40:00.979076 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 29 00:40:01.027699 dracut-pre-trigger[637]: rd.md=0: removing MD RAID activation Oct 29 00:40:01.057687 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 00:40:01.064163 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 00:40:01.099593 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 00:40:01.106132 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 00:40:01.149502 systemd-networkd[692]: lo: Link UP Oct 29 00:40:01.149517 systemd-networkd[692]: lo: Gained carrier Oct 29 00:40:01.153158 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 00:40:01.154029 systemd[1]: Reached target network.target - Network. Oct 29 00:40:01.229323 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 00:40:01.235259 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 29 00:40:01.396581 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 29 00:40:01.425123 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 29 00:40:01.443030 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 29 00:40:01.461731 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 00:40:01.466017 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 29 00:40:01.504010 disk-uuid[737]: Primary Header is updated. Oct 29 00:40:01.504010 disk-uuid[737]: Secondary Entries is updated. Oct 29 00:40:01.504010 disk-uuid[737]: Secondary Header is updated. 
Oct 29 00:40:01.508247 kernel: cryptd: max_cpu_qlen set to 1000
Oct 29 00:40:01.570894 kernel: AES CTR mode by8 optimization enabled
Oct 29 00:40:01.610381 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 29 00:40:01.647811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 00:40:01.647996 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:40:01.649263 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:40:01.655647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:40:01.670627 systemd-networkd[692]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Oct 29 00:40:01.670642 systemd-networkd[692]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Oct 29 00:40:01.672092 systemd-networkd[692]: eth0: Link UP
Oct 29 00:40:01.673633 systemd-networkd[692]: eth0: Gained carrier
Oct 29 00:40:01.673658 systemd-networkd[692]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Oct 29 00:40:01.687043 systemd-networkd[692]: eth0: DHCPv4 address 64.23.202.85/19, gateway 64.23.192.1 acquired from 169.254.169.253
Oct 29 00:40:01.695085 systemd-networkd[692]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:40:01.695097 systemd-networkd[692]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 29 00:40:01.701041 systemd-networkd[692]: eth1: Link UP
Oct 29 00:40:01.701410 systemd-networkd[692]: eth1: Gained carrier
Oct 29 00:40:01.701430 systemd-networkd[692]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:40:01.714029 systemd-networkd[692]: eth1: DHCPv4 address 10.124.0.15/20 acquired from 169.254.169.253
Oct 29 00:40:01.820433 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 29 00:40:01.826537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:40:01.829512 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 00:40:01.830513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 00:40:01.831720 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 00:40:01.834311 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 29 00:40:01.876310 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 00:40:02.614261 disk-uuid[739]: Warning: The kernel is still using the old partition table.
Oct 29 00:40:02.614261 disk-uuid[739]: The new table will be used at the next reboot or after you
Oct 29 00:40:02.614261 disk-uuid[739]: run partprobe(8) or kpartx(8)
Oct 29 00:40:02.614261 disk-uuid[739]: The operation has completed successfully.
Oct 29 00:40:02.622252 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 29 00:40:02.622385 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 29 00:40:02.625246 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 29 00:40:02.668936 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (831)
Oct 29 00:40:02.672224 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:40:02.672400 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:40:02.679276 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 00:40:02.679387 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 00:40:02.689901 kernel: BTRFS info (device vda6): last unmount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:40:02.690728 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 29 00:40:02.694026 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 29 00:40:02.987233 ignition[850]: Ignition 2.22.0
Oct 29 00:40:02.988362 ignition[850]: Stage: fetch-offline
Oct 29 00:40:02.988445 ignition[850]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:40:02.988465 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 29 00:40:02.992041 ignition[850]: parsed url from cmdline: ""
Oct 29 00:40:02.992054 ignition[850]: no config URL provided
Oct 29 00:40:02.992068 ignition[850]: reading system config file "/usr/lib/ignition/user.ign"
Oct 29 00:40:02.992103 ignition[850]: no config at "/usr/lib/ignition/user.ign"
Oct 29 00:40:02.992111 ignition[850]: failed to fetch config: resource requires networking
Oct 29 00:40:02.994382 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 00:40:02.992479 ignition[850]: Ignition finished successfully
Oct 29 00:40:02.998180 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 29 00:40:03.050848 ignition[857]: Ignition 2.22.0
Oct 29 00:40:03.050877 ignition[857]: Stage: fetch
Oct 29 00:40:03.051252 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:40:03.051270 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 29 00:40:03.051417 ignition[857]: parsed url from cmdline: ""
Oct 29 00:40:03.051424 ignition[857]: no config URL provided
Oct 29 00:40:03.051433 ignition[857]: reading system config file "/usr/lib/ignition/user.ign"
Oct 29 00:40:03.051444 ignition[857]: no config at "/usr/lib/ignition/user.ign"
Oct 29 00:40:03.051482 ignition[857]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Oct 29 00:40:03.236835 ignition[857]: GET result: OK
Oct 29 00:40:03.237814 ignition[857]: parsing config with SHA512: 58203c10455d728d1bae9acc93a36a04faa71cff54f781ab7ab5ab48be53d97bfdcdc327e5e06bd7f5f910362aeed6e2e7f35d18a440a6688ff517f6b7bea4dd
Oct 29 00:40:03.243129 unknown[857]: fetched base config from "system"
Oct 29 00:40:03.243767 ignition[857]: fetch: fetch complete
Oct 29 00:40:03.243171 unknown[857]: fetched base config from "system"
Oct 29 00:40:03.243774 ignition[857]: fetch: fetch passed
Oct 29 00:40:03.243181 unknown[857]: fetched user config from "digitalocean"
Oct 29 00:40:03.243869 ignition[857]: Ignition finished successfully
Oct 29 00:40:03.248465 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 29 00:40:03.251606 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 29 00:40:03.296006 ignition[863]: Ignition 2.22.0
Oct 29 00:40:03.296020 ignition[863]: Stage: kargs
Oct 29 00:40:03.296252 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:40:03.296263 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 29 00:40:03.297828 ignition[863]: kargs: kargs passed
Oct 29 00:40:03.297932 ignition[863]: Ignition finished successfully
Oct 29 00:40:03.302038 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 29 00:40:03.305122 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 29 00:40:03.348031 ignition[870]: Ignition 2.22.0
Oct 29 00:40:03.348061 ignition[870]: Stage: disks
Oct 29 00:40:03.348229 ignition[870]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:40:03.348238 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 29 00:40:03.350902 ignition[870]: disks: disks passed
Oct 29 00:40:03.351004 ignition[870]: Ignition finished successfully
Oct 29 00:40:03.353251 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 29 00:40:03.354072 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 29 00:40:03.354743 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 29 00:40:03.355809 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 29 00:40:03.356718 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 29 00:40:03.357614 systemd[1]: Reached target basic.target - Basic System.
Oct 29 00:40:03.360242 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 29 00:40:03.411951 systemd-fsck[878]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 29 00:40:03.414327 systemd-networkd[692]: eth0: Gained IPv6LL
Oct 29 00:40:03.416793 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 29 00:40:03.420661 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 29 00:40:03.540628 systemd-networkd[692]: eth1: Gained IPv6LL
Oct 29 00:40:03.572885 kernel: EXT4-fs (vda9): mounted filesystem ef53721c-fae5-4ad9-8976-8181c84bc175 r/w with ordered data mode. Quota mode: none.
Oct 29 00:40:03.574189 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 29 00:40:03.575780 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 29 00:40:03.579411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 29 00:40:03.582458 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 29 00:40:03.589214 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Oct 29 00:40:03.595751 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 29 00:40:03.596386 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 29 00:40:03.596447 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 00:40:03.606904 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887)
Oct 29 00:40:03.609884 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:40:03.611913 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:40:03.616284 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 29 00:40:03.625927 kernel: BTRFS info (device vda6): turning on async discard Oct 29 00:40:03.626022 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 00:40:03.619665 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 29 00:40:03.637727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 29 00:40:03.732462 coreos-metadata[890]: Oct 29 00:40:03.732 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 29 00:40:03.744408 coreos-metadata[890]: Oct 29 00:40:03.743 INFO Fetch successful Oct 29 00:40:03.756385 coreos-metadata[889]: Oct 29 00:40:03.756 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 29 00:40:03.757932 coreos-metadata[890]: Oct 29 00:40:03.756 INFO wrote hostname ci-4487.0.0-n-61970e6314 to /sysroot/etc/hostname Oct 29 00:40:03.760601 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 29 00:40:03.764050 initrd-setup-root[919]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 00:40:03.767793 coreos-metadata[889]: Oct 29 00:40:03.767 INFO Fetch successful Oct 29 00:40:03.777927 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Oct 29 00:40:03.778765 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Oct 29 00:40:03.780813 initrd-setup-root[927]: cut: /sysroot/etc/group: No such file or directory Oct 29 00:40:03.788031 initrd-setup-root[935]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 00:40:03.796099 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 00:40:03.939283 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 29 00:40:03.941461 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 29 00:40:03.943749 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 29 00:40:03.968384 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 29 00:40:03.969871 kernel: BTRFS info (device vda6): last unmount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae Oct 29 00:40:03.989041 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 29 00:40:04.016574 ignition[1010]: INFO : Ignition 2.22.0 Oct 29 00:40:04.016574 ignition[1010]: INFO : Stage: mount Oct 29 00:40:04.018433 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:40:04.018433 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 29 00:40:04.021429 ignition[1010]: INFO : mount: mount passed Oct 29 00:40:04.021429 ignition[1010]: INFO : Ignition finished successfully Oct 29 00:40:04.021975 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 29 00:40:04.025431 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 29 00:40:04.051131 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 00:40:04.078183 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1023) Oct 29 00:40:04.078279 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae Oct 29 00:40:04.079935 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 00:40:04.085017 kernel: BTRFS info (device vda6): turning on async discard Oct 29 00:40:04.085153 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 00:40:04.088978 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 29 00:40:04.127870 ignition[1039]: INFO : Ignition 2.22.0 Oct 29 00:40:04.127870 ignition[1039]: INFO : Stage: files Oct 29 00:40:04.129408 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:40:04.129408 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 29 00:40:04.129408 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping Oct 29 00:40:04.131488 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 29 00:40:04.131488 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 29 00:40:04.135645 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 29 00:40:04.136469 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 29 00:40:04.137513 unknown[1039]: wrote ssh authorized keys file for user: core Oct 29 00:40:04.138460 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 29 00:40:04.139584 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 29 00:40:04.140417 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 29 00:40:04.166780 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 29 00:40:04.284473 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 29 00:40:04.284473 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 29 00:40:04.286654 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 29 00:40:04.286654 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 29 00:40:04.286654 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 29 00:40:04.286654 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 00:40:04.286654 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 00:40:04.286654 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 00:40:04.291833 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 00:40:04.291833 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 00:40:04.291833 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 00:40:04.291833 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 29 00:40:04.291833 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 29 00:40:04.291833 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 29 00:40:04.301415 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 29 00:40:04.587534 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 29 00:40:04.940275 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 29 00:40:04.940275 ignition[1039]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 29 00:40:04.942651 ignition[1039]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 00:40:04.945112 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 00:40:04.945112 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 29 00:40:04.945112 ignition[1039]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 29 00:40:04.945112 ignition[1039]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 00:40:04.945112 ignition[1039]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:40:04.945112 ignition[1039]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:40:04.945112 ignition[1039]: INFO : files: files passed Oct 29 00:40:04.945112 ignition[1039]: INFO : Ignition finished successfully Oct 29 00:40:04.947413 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 29 00:40:04.950072 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 29 00:40:04.954163 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 29 00:40:04.973474 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 29 00:40:04.973686 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 29 00:40:04.985718 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 00:40:04.987090 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 00:40:04.987090 initrd-setup-root-after-ignition[1070]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 29 00:40:04.989415 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 00:40:04.990506 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 29 00:40:04.993020 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 29 00:40:05.061452 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 29 00:40:05.061683 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Oct 29 00:40:05.063336 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 29 00:40:05.064087 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 29 00:40:05.065402 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 29 00:40:05.067152 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 29 00:40:05.113172 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 00:40:05.115758 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 29 00:40:05.140616 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 00:40:05.141818 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 29 00:40:05.142432 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 00:40:05.143013 systemd[1]: Stopped target timers.target - Timer Units. Oct 29 00:40:05.143515 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 29 00:40:05.143687 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 00:40:05.145008 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 29 00:40:05.145986 systemd[1]: Stopped target basic.target - Basic System. Oct 29 00:40:05.146795 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 29 00:40:05.147742 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 00:40:05.148614 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 29 00:40:05.149442 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 29 00:40:05.150299 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 29 00:40:05.151223 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 00:40:05.152161 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 29 00:40:05.152968 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 29 00:40:05.153945 systemd[1]: Stopped target swap.target - Swaps. Oct 29 00:40:05.154714 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 29 00:40:05.154888 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 29 00:40:05.156104 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 29 00:40:05.156694 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 00:40:05.157493 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 29 00:40:05.157748 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 00:40:05.158361 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 29 00:40:05.158545 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 29 00:40:05.159596 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 29 00:40:05.159766 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 00:40:05.160898 systemd[1]: ignition-files.service: Deactivated successfully. Oct 29 00:40:05.161014 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 29 00:40:05.161783 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Oct 29 00:40:05.161950 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 29 00:40:05.164992 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 29 00:40:05.165907 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 29 00:40:05.166073 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 00:40:05.170297 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 29 00:40:05.171984 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 29 00:40:05.172237 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 00:40:05.174040 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 29 00:40:05.174174 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 00:40:05.175411 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 29 00:40:05.176012 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 00:40:05.186421 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 29 00:40:05.186541 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 29 00:40:05.213667 ignition[1094]: INFO : Ignition 2.22.0 Oct 29 00:40:05.216718 ignition[1094]: INFO : Stage: umount Oct 29 00:40:05.216718 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:40:05.216718 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 29 00:40:05.216718 ignition[1094]: INFO : umount: umount passed Oct 29 00:40:05.216718 ignition[1094]: INFO : Ignition finished successfully Oct 29 00:40:05.217403 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 29 00:40:05.221847 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 29 00:40:05.222526 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 29 00:40:05.223427 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 29 00:40:05.223489 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 29 00:40:05.246485 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 29 00:40:05.246682 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 29 00:40:05.255876 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 29 00:40:05.256001 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 29 00:40:05.259017 systemd[1]: Stopped target network.target - Network. Oct 29 00:40:05.260296 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 29 00:40:05.260426 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 00:40:05.262498 systemd[1]: Stopped target paths.target - Path Units. Oct 29 00:40:05.263346 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 29 00:40:05.263414 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 00:40:05.264198 systemd[1]: Stopped target slices.target - Slice Units. Oct 29 00:40:05.265113 systemd[1]: Stopped target sockets.target - Socket Units. Oct 29 00:40:05.265961 systemd[1]: iscsid.socket: Deactivated successfully. Oct 29 00:40:05.266021 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 00:40:05.266801 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 29 00:40:05.266902 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Oct 29 00:40:05.267766 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 29 00:40:05.267844 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 29 00:40:05.268629 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 29 00:40:05.268686 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 29 00:40:05.269628 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 29 00:40:05.270536 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 29 00:40:05.272309 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 29 00:40:05.272444 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 29 00:40:05.276515 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 29 00:40:05.276684 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 29 00:40:05.278786 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 29 00:40:05.279002 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 29 00:40:05.284958 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 29 00:40:05.285093 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 29 00:40:05.289220 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 29 00:40:05.289878 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 29 00:40:05.289934 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 29 00:40:05.292244 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 29 00:40:05.294237 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 29 00:40:05.294349 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 00:40:05.295356 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 00:40:05.295431 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 29 00:40:05.297755 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 29 00:40:05.297844 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 29 00:40:05.298485 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 00:40:05.321212 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 29 00:40:05.321502 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 00:40:05.325631 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 29 00:40:05.325751 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 29 00:40:05.326361 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 29 00:40:05.326405 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 00:40:05.328597 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 29 00:40:05.328679 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 29 00:40:05.329823 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 29 00:40:05.329966 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 29 00:40:05.330516 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 29 00:40:05.330593 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 00:40:05.333032 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Oct 29 00:40:05.333574 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 29 00:40:05.333643 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 00:40:05.335352 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 29 00:40:05.335422 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 00:40:05.337255 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 00:40:05.337324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:40:05.348556 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 29 00:40:05.348705 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 29 00:40:05.354347 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 29 00:40:05.355400 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 29 00:40:05.356845 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 29 00:40:05.359232 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 29 00:40:05.401344 systemd[1]: Switching root. Oct 29 00:40:05.457390 systemd-journald[293]: Journal stopped Oct 29 00:40:06.641517 systemd-journald[293]: Received SIGTERM from PID 1 (systemd). Oct 29 00:40:06.641625 kernel: SELinux: policy capability network_peer_controls=1 Oct 29 00:40:06.641652 kernel: SELinux: policy capability open_perms=1 Oct 29 00:40:06.641669 kernel: SELinux: policy capability extended_socket_class=1 Oct 29 00:40:06.641683 kernel: SELinux: policy capability always_check_network=0 Oct 29 00:40:06.641696 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 29 00:40:06.641716 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 29 00:40:06.641731 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 29 00:40:06.641748 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 29 00:40:06.641765 kernel: SELinux: policy capability userspace_initial_context=0 Oct 29 00:40:06.641780 kernel: audit: type=1403 audit(1761698405.605:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 29 00:40:06.641805 systemd[1]: Successfully loaded SELinux policy in 87.917ms. Oct 29 00:40:06.641831 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.705ms. Oct 29 00:40:06.641848 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 00:40:06.643952 systemd[1]: Detected virtualization kvm. Oct 29 00:40:06.643988 systemd[1]: Detected architecture x86-64. Oct 29 00:40:06.644010 systemd[1]: Detected first boot. Oct 29 00:40:06.644040 systemd[1]: Hostname set to . Oct 29 00:40:06.644062 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 29 00:40:06.644084 zram_generator::config[1141]: No configuration found. Oct 29 00:40:06.644108 kernel: Guest personality initialized and is inactive Oct 29 00:40:06.644130 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 29 00:40:06.644155 kernel: Initialized host personality Oct 29 00:40:06.644174 kernel: NET: Registered PF_VSOCK protocol family Oct 29 00:40:06.644195 systemd[1]: Populated /etc with preset unit settings. 
Oct 29 00:40:06.644216 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 29 00:40:06.644245 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 29 00:40:06.644270 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 29 00:40:06.644293 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 29 00:40:06.644319 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 29 00:40:06.644340 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 29 00:40:06.644361 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 29 00:40:06.644383 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 29 00:40:06.644405 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 29 00:40:06.644428 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 29 00:40:06.644450 systemd[1]: Created slice user.slice - User and Session Slice. Oct 29 00:40:06.644478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 00:40:06.644501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 00:40:06.644522 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 29 00:40:06.644543 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 29 00:40:06.644566 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 29 00:40:06.644592 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 00:40:06.644622 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 29 00:40:06.644644 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 00:40:06.644667 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 00:40:06.644689 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 29 00:40:06.644710 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 29 00:40:06.644735 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 29 00:40:06.644766 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 29 00:40:06.644790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 00:40:06.644811 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 00:40:06.644832 systemd[1]: Reached target slices.target - Slice Units. Oct 29 00:40:06.645890 systemd[1]: Reached target swap.target - Swaps. Oct 29 00:40:06.645922 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 29 00:40:06.645940 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 29 00:40:06.645963 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 29 00:40:06.645977 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 00:40:06.645991 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 00:40:06.646006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Oct 29 00:40:06.646021 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 29 00:40:06.646036 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 29 00:40:06.646050 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 29 00:40:06.646068 systemd[1]: Mounting media.mount - External Media Directory... Oct 29 00:40:06.646082 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:40:06.646097 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 29 00:40:06.646111 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 29 00:40:06.646127 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 29 00:40:06.646142 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 29 00:40:06.646156 systemd[1]: Reached target machines.target - Containers. Oct 29 00:40:06.646174 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 29 00:40:06.646189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 00:40:06.646203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 00:40:06.646216 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 29 00:40:06.646230 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 00:40:06.646244 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 00:40:06.646259 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 00:40:06.646276 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 29 00:40:06.646289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 00:40:06.646303 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 29 00:40:06.646320 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 29 00:40:06.646333 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 29 00:40:06.646347 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 29 00:40:06.646364 systemd[1]: Stopped systemd-fsck-usr.service. Oct 29 00:40:06.646379 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 00:40:06.646393 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 00:40:06.646407 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 00:40:06.646421 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 00:40:06.646439 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 29 00:40:06.646454 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 29 00:40:06.646468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Oct 29 00:40:06.646482 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:40:06.646496 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 29 00:40:06.646509 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 29 00:40:06.646527 systemd[1]: Mounted media.mount - External Media Directory. Oct 29 00:40:06.646541 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 29 00:40:06.646555 kernel: ACPI: bus type drm_connector registered Oct 29 00:40:06.646570 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 29 00:40:06.646583 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 29 00:40:06.646599 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 00:40:06.646614 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 00:40:06.646632 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 29 00:40:06.646646 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:40:06.646664 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 00:40:06.646678 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 00:40:06.646697 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 00:40:06.646711 kernel: fuse: init (API version 7.41) Oct 29 00:40:06.646726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:40:06.646741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 00:40:06.646754 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 29 00:40:06.646768 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 29 00:40:06.646782 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:40:06.646798 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 00:40:06.646812 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 00:40:06.646828 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 29 00:40:06.646843 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 29 00:40:06.648938 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 29 00:40:06.649007 systemd-journald[1211]: Collecting audit messages is disabled. Oct 29 00:40:06.649051 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 29 00:40:06.649067 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 00:40:06.649086 systemd-journald[1211]: Journal started Oct 29 00:40:06.649113 systemd-journald[1211]: Runtime Journal (/run/log/journal/2680a907996e4f068645bf51f075bb64) is 4.9M, max 39.2M, 34.3M free. Oct 29 00:40:06.280794 systemd[1]: Queued start job for default target multi-user.target. Oct 29 00:40:06.295113 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 29 00:40:06.295803 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 29 00:40:06.654911 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 00:40:06.661457 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Oct 29 00:40:06.666893 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 00:40:06.674904 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 29 00:40:06.677981 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 00:40:06.682895 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 29 00:40:06.685901 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 00:40:06.696888 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 00:40:06.704907 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 29 00:40:06.708964 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 00:40:06.712495 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 00:40:06.714776 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 29 00:40:06.716199 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 29 00:40:06.750244 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 29 00:40:06.757013 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 29 00:40:06.766789 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 29 00:40:06.767533 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 00:40:06.772191 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 29 00:40:06.777431 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 29 00:40:06.784907 kernel: loop1: detected capacity change from 0 to 219144 Oct 29 00:40:06.796244 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 00:40:06.809778 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 00:40:06.820598 systemd-journald[1211]: Time spent on flushing to /var/log/journal/2680a907996e4f068645bf51f075bb64 is 31.243ms for 1005 entries. Oct 29 00:40:06.820598 systemd-journald[1211]: System Journal (/var/log/journal/2680a907996e4f068645bf51f075bb64) is 8M, max 163.5M, 155.5M free. Oct 29 00:40:06.860948 systemd-journald[1211]: Received client request to flush runtime journal. Oct 29 00:40:06.861027 kernel: loop2: detected capacity change from 0 to 128048 Oct 29 00:40:06.825426 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 29 00:40:06.829087 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 29 00:40:06.830285 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 29 00:40:06.863950 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 29 00:40:06.897682 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 29 00:40:06.901139 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 00:40:06.904896 kernel: loop3: detected capacity change from 0 to 8 Oct 29 00:40:06.905697 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Oct 29 00:40:06.933931 kernel: loop4: detected capacity change from 0 to 110976 Oct 29 00:40:06.940147 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 29 00:40:06.971927 kernel: loop5: detected capacity change from 0 to 219144 Oct 29 00:40:06.984166 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Oct 29 00:40:06.984306 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Oct 29 00:40:06.997029 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 00:40:07.002888 kernel: loop6: detected capacity change from 0 to 128048 Oct 29 00:40:07.022890 kernel: loop7: detected capacity change from 0 to 8 Oct 29 00:40:07.029892 kernel: loop1: detected capacity change from 0 to 110976 Oct 29 00:40:07.051373 (sd-merge)[1285]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Oct 29 00:40:07.060691 (sd-merge)[1285]: Merged extensions into '/usr'. Oct 29 00:40:07.071104 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)... Oct 29 00:40:07.071138 systemd[1]: Reloading... Oct 29 00:40:07.276881 zram_generator::config[1328]: No configuration found. Oct 29 00:40:07.278186 systemd-resolved[1279]: Positive Trust Anchors: Oct 29 00:40:07.278202 systemd-resolved[1279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 00:40:07.278207 systemd-resolved[1279]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 00:40:07.278244 systemd-resolved[1279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 00:40:07.304652 systemd-resolved[1279]: Using system hostname 'ci-4487.0.0-n-61970e6314'. Oct 29 00:40:07.509919 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 29 00:40:07.510078 systemd[1]: Reloading finished in 438 ms. Oct 29 00:40:07.524181 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 29 00:40:07.525086 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 00:40:07.526139 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 29 00:40:07.529396 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 00:40:07.539150 systemd[1]: Starting ensure-sysext.service... Oct 29 00:40:07.543176 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 00:40:07.592917 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 29 00:40:07.593097 systemd[1]: Reload requested from client PID 1361 ('systemctl') (unit ensure-sysext.service)... Oct 29 00:40:07.593119 systemd[1]: Reloading... Oct 29 00:40:07.595059 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Oct 29 00:40:07.595430 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 29 00:40:07.595888 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 29 00:40:07.600892 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 29 00:40:07.601171 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Oct 29 00:40:07.601233 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Oct 29 00:40:07.617479 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 00:40:07.618182 systemd-tmpfiles[1362]: Skipping /boot Oct 29 00:40:07.632653 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 00:40:07.632844 systemd-tmpfiles[1362]: Skipping /boot Oct 29 00:40:07.684903 zram_generator::config[1389]: No configuration found. Oct 29 00:40:07.913039 systemd[1]: Reloading finished in 319 ms. Oct 29 00:40:07.925789 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 29 00:40:07.941431 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 00:40:07.954036 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 00:40:07.957088 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 29 00:40:07.961393 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 29 00:40:07.970300 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 29 00:40:07.977437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 00:40:07.982362 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 29 00:40:07.989927 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:40:07.990243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 00:40:07.996513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 00:40:08.005625 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 00:40:08.015393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 00:40:08.016217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 00:40:08.016396 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 00:40:08.016547 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:40:08.025739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:40:08.028204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 00:40:08.028480 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 29 00:40:08.028615 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 00:40:08.028755 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:40:08.039754 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:40:08.040180 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 00:40:08.046424 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 00:40:08.048148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 00:40:08.048348 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 00:40:08.048545 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:40:08.075430 systemd[1]: Finished ensure-sysext.service. Oct 29 00:40:08.087970 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 29 00:40:08.090642 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:40:08.093333 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 00:40:08.095558 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 00:40:08.095825 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 00:40:08.115726 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:40:08.116335 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 00:40:08.120693 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 00:40:08.131423 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 29 00:40:08.133840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:40:08.135277 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 00:40:08.138476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 00:40:08.206643 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 29 00:40:08.237472 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 29 00:40:08.241357 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 00:40:08.246592 systemd-udevd[1441]: Using default interface naming scheme 'v257'. Oct 29 00:40:08.257903 augenrules[1477]: No rules Oct 29 00:40:08.261474 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 00:40:08.261840 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Oct 29 00:40:08.317950 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 29 00:40:08.318812 systemd[1]: Reached target time-set.target - System Time Set. Oct 29 00:40:08.339258 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 00:40:08.347478 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 00:40:08.522474 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Oct 29 00:40:08.526238 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Oct 29 00:40:08.529004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:40:08.529225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 00:40:08.531056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 00:40:08.535882 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 00:40:08.549412 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 00:40:08.550396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 00:40:08.550461 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 00:40:08.550511 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 00:40:08.550540 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:40:08.577020 systemd-networkd[1487]: lo: Link UP Oct 29 00:40:08.577470 systemd-networkd[1487]: lo: Gained carrier Oct 29 00:40:08.581813 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 00:40:08.582655 systemd[1]: Reached target network.target - Network. Oct 29 00:40:08.593419 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 29 00:40:08.600017 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 29 00:40:08.609714 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:40:08.611543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 00:40:08.659023 kernel: ISO 9660 Extensions: RRIP_1991A Oct 29 00:40:08.665281 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Oct 29 00:40:08.680924 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 29 00:40:08.688588 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 29 00:40:08.693752 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:40:08.696686 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 00:40:08.699342 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 29 00:40:08.699956 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 00:40:08.703789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 00:40:08.706792 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 00:40:08.768265 systemd-networkd[1487]: eth1: Configuring with /run/systemd/network/10-c6:8a:2a:12:90:31.network. Oct 29 00:40:08.773540 systemd-networkd[1487]: eth0: Configuring with /run/systemd/network/10-3a:7e:90:a8:3b:bb.network. Oct 29 00:40:08.786532 systemd-networkd[1487]: eth1: Link UP Oct 29 00:40:08.787651 systemd-networkd[1487]: eth1: Gained carrier Oct 29 00:40:08.792370 kernel: mousedev: PS/2 mouse device common for all mice Oct 29 00:40:08.796076 systemd-networkd[1487]: eth0: Link UP Oct 29 00:40:08.800819 systemd-networkd[1487]: eth0: Gained carrier Oct 29 00:40:08.810276 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Oct 29 00:40:08.812920 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Oct 29 00:40:08.888590 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 29 00:40:08.896888 kernel: ACPI: button: Power Button [PWRF] Oct 29 00:40:08.941899 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 29 00:40:08.962122 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 29 00:40:09.021212 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 00:40:09.025125 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 29 00:40:09.045844 ldconfig[1439]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 00:40:09.050988 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 29 00:40:09.059272 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 29 00:40:09.077795 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 29 00:40:09.099057 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 29 00:40:09.101341 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 00:40:09.103912 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 29 00:40:09.104725 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 29 00:40:09.106359 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 29 00:40:09.108242 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 29 00:40:09.108981 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 29 00:40:09.111025 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 29 00:40:09.111591 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 00:40:09.111633 systemd[1]: Reached target paths.target - Path Units. Oct 29 00:40:09.112993 systemd[1]: Reached target timers.target - Timer Units. 
Oct 29 00:40:09.115450 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 29 00:40:09.121202 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 29 00:40:09.129635 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 29 00:40:09.130598 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 29 00:40:09.131304 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 29 00:40:09.141936 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 29 00:40:09.143051 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 29 00:40:09.144898 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 29 00:40:09.149243 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 00:40:09.149808 systemd[1]: Reached target basic.target - Basic System. Oct 29 00:40:09.150472 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 29 00:40:09.150507 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 29 00:40:09.153654 systemd[1]: Starting containerd.service - containerd container runtime... Oct 29 00:40:09.158302 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 29 00:40:09.164233 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 29 00:40:09.176250 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 29 00:40:09.181321 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 29 00:40:09.186737 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 29 00:40:09.188011 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 29 00:40:09.194950 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 29 00:40:09.204834 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 29 00:40:09.205223 jq[1554]: false Oct 29 00:40:09.212582 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 29 00:40:09.219257 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 29 00:40:09.227281 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 29 00:40:09.239340 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 29 00:40:09.241057 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 29 00:40:09.241702 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 29 00:40:09.242587 systemd[1]: Starting update-engine.service - Update Engine... Oct 29 00:40:09.247224 coreos-metadata[1551]: Oct 29 00:40:09.247 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 29 00:40:09.247515 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 29 00:40:09.257356 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 29 00:40:09.258619 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Oct 29 00:40:09.260033 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 29 00:40:09.274371 coreos-metadata[1551]: Oct 29 00:40:09.274 INFO Fetch successful Oct 29 00:40:09.307655 extend-filesystems[1555]: Found /dev/vda6 Oct 29 00:40:09.307261 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 00:40:09.307522 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 29 00:40:09.315430 extend-filesystems[1555]: Found /dev/vda9 Oct 29 00:40:09.320654 extend-filesystems[1555]: Checking size of /dev/vda9 Oct 29 00:40:09.337913 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing passwd entry cache Oct 29 00:40:09.336077 oslogin_cache_refresh[1556]: Refreshing passwd entry cache Oct 29 00:40:09.361241 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 00:40:09.361669 update_engine[1564]: I20251029 00:40:09.361572 1564 main.cc:92] Flatcar Update Engine starting Oct 29 00:40:09.362568 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting users, quitting Oct 29 00:40:09.362568 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 29 00:40:09.362568 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing group entry cache Oct 29 00:40:09.362248 oslogin_cache_refresh[1556]: Failure getting users, quitting Oct 29 00:40:09.362272 oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 29 00:40:09.362330 oslogin_cache_refresh[1556]: Refreshing group entry cache Oct 29 00:40:09.363556 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting groups, quitting Oct 29 00:40:09.363556 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 29 00:40:09.363548 oslogin_cache_refresh[1556]: Failure getting groups, quitting Oct 29 00:40:09.363562 oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 29 00:40:09.369042 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 29 00:40:09.377275 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 29 00:40:09.377509 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 29 00:40:09.380424 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 29 00:40:09.395902 jq[1565]: true Oct 29 00:40:09.407091 extend-filesystems[1555]: Resized partition /dev/vda9 Oct 29 00:40:09.415220 extend-filesystems[1601]: resize2fs 1.47.3 (8-Jul-2025) Oct 29 00:40:09.434408 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Oct 29 00:40:09.438688 dbus-daemon[1552]: [system] SELinux support is enabled Oct 29 00:40:09.439113 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 29 00:40:09.443931 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 00:40:09.444436 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
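[Editor's note] The coreos-metadata entries above show the Flatcar metadata agent fetching http://169.254.169.254/metadata/v1.json and reporting "Fetch successful". A minimal sketch of the same fetch, assuming it runs inside a DigitalOcean droplet where that link-local endpoint is reachable; the printed field names are illustrative, not a complete schema:

    import json
    import urllib.request

    # Link-local metadata endpoint used by the agent in the log above.
    URL = "http://169.254.169.254/metadata/v1.json"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        meta = json.load(resp)

    # Inspect the returned document for the full set of keys.
    print(meta.get("hostname"), meta.get("region"))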
Oct 29 00:40:09.445139 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 00:40:09.445241 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Oct 29 00:40:09.445259 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 29 00:40:09.466234 tar[1567]: linux-amd64/LICENSE Oct 29 00:40:09.466234 tar[1567]: linux-amd64/helm Oct 29 00:40:09.479136 systemd[1]: Started update-engine.service - Update Engine. Oct 29 00:40:09.482649 update_engine[1564]: I20251029 00:40:09.482414 1564 update_check_scheduler.cc:74] Next update check in 3m22s Oct 29 00:40:09.485030 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 29 00:40:09.503956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 00:40:09.557341 jq[1605]: true Oct 29 00:40:09.505195 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 29 00:40:09.506302 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 29 00:40:09.678038 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Oct 29 00:40:09.698754 extend-filesystems[1601]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 29 00:40:09.698754 extend-filesystems[1601]: old_desc_blocks = 1, new_desc_blocks = 7 Oct 29 00:40:09.698754 extend-filesystems[1601]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Oct 29 00:40:09.701637 extend-filesystems[1555]: Resized filesystem in /dev/vda9 Oct 29 00:40:09.699299 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 00:40:09.700129 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
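[Editor's note] extend-filesystems reports an online resize of /dev/vda9 from 456704 to 14138363 blocks at a 4 KiB block size. A worked conversion of those block counts into bytes, using only the figures from the log:

    # 4 KiB ext4 blocks, counts taken from the extend-filesystems output above.
    BLOCK_SIZE = 4096
    old_blocks, new_blocks = 456_704, 14_138_363

    old_bytes = old_blocks * BLOCK_SIZE   # 1_870_659_584 bytes, about 1.7 GiB
    new_bytes = new_blocks * BLOCK_SIZE   # 57_910_734_848 bytes, about 53.9 GiB

    print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")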
Oct 29 00:40:09.746498 bash[1638]: Updated "/home/core/.ssh/authorized_keys" Oct 29 00:40:09.826886 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Oct 29 00:40:09.829880 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Oct 29 00:40:09.873178 containerd[1577]: time="2025-10-29T00:40:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 29 00:40:09.875884 containerd[1577]: time="2025-10-29T00:40:09.874089296Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 29 00:40:09.904369 containerd[1577]: time="2025-10-29T00:40:09.904311745Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.108µs" Oct 29 00:40:09.904369 containerd[1577]: time="2025-10-29T00:40:09.904354136Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 29 00:40:09.904369 containerd[1577]: time="2025-10-29T00:40:09.904372541Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 29 00:40:09.904586 containerd[1577]: time="2025-10-29T00:40:09.904569422Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 29 00:40:09.904613 containerd[1577]: time="2025-10-29T00:40:09.904590257Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 29 00:40:09.904659 containerd[1577]: time="2025-10-29T00:40:09.904621298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 00:40:09.904700 containerd[1577]: time="2025-10-29T00:40:09.904683583Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 00:40:09.904724 containerd[1577]: time="2025-10-29T00:40:09.904698491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 00:40:09.905148 containerd[1577]: time="2025-10-29T00:40:09.905108290Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 00:40:09.905148 containerd[1577]: time="2025-10-29T00:40:09.905129784Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 00:40:09.905148 containerd[1577]: time="2025-10-29T00:40:09.905141245Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 00:40:09.905148 containerd[1577]: time="2025-10-29T00:40:09.905150703Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 29 00:40:09.905302 containerd[1577]: time="2025-10-29T00:40:09.905242216Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 29 00:40:09.905521 containerd[1577]: time="2025-10-29T00:40:09.905478090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 
00:40:09.905570 containerd[1577]: time="2025-10-29T00:40:09.905551938Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 00:40:09.905600 containerd[1577]: time="2025-10-29T00:40:09.905570341Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 29 00:40:09.905623 containerd[1577]: time="2025-10-29T00:40:09.905613007Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 29 00:40:09.908391 containerd[1577]: time="2025-10-29T00:40:09.905848981Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 29 00:40:09.908391 containerd[1577]: time="2025-10-29T00:40:09.908157189Z" level=info msg="metadata content store policy set" policy=shared Oct 29 00:40:09.911165 containerd[1577]: time="2025-10-29T00:40:09.911121537Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911323721Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911346505Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911411447Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911432738Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911444145Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911456449Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911468809Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911480585Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911491904Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911502796Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911514758Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911673242Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 29 00:40:09.912490 containerd[1577]: time="2025-10-29T00:40:09.911693002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 29 00:40:09.912490 containerd[1577]: 
time="2025-10-29T00:40:09.911708070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 29 00:40:09.912873 containerd[1577]: time="2025-10-29T00:40:09.911719470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 29 00:40:09.912873 containerd[1577]: time="2025-10-29T00:40:09.911731150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 29 00:40:09.912873 containerd[1577]: time="2025-10-29T00:40:09.911741936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 29 00:40:09.912873 containerd[1577]: time="2025-10-29T00:40:09.911771527Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 29 00:40:09.912873 containerd[1577]: time="2025-10-29T00:40:09.911787055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 29 00:40:09.912873 containerd[1577]: time="2025-10-29T00:40:09.911799507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 29 00:40:09.912873 containerd[1577]: time="2025-10-29T00:40:09.911809800Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 29 00:40:09.912873 containerd[1577]: time="2025-10-29T00:40:09.911821828Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 29 00:40:09.912873 containerd[1577]: time="2025-10-29T00:40:09.911906556Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 29 00:40:09.912873 containerd[1577]: time="2025-10-29T00:40:09.911921883Z" level=info msg="Start snapshots syncer" Oct 29 00:40:09.912873 containerd[1577]: time="2025-10-29T00:40:09.911951014Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 29 00:40:09.913099 containerd[1577]: time="2025-10-29T00:40:09.912201072Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 29 00:40:09.913099 containerd[1577]: time="2025-10-29T00:40:09.912261470Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.918406761Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923470486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923514869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923527591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923538228Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923550908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923587405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923602289Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923659889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 29 00:40:09.924464 containerd[1577]: 
time="2025-10-29T00:40:09.923673962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923685593Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923742962Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923760346Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 00:40:09.924464 containerd[1577]: time="2025-10-29T00:40:09.923771197Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 00:40:09.924882 containerd[1577]: time="2025-10-29T00:40:09.923782062Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 00:40:09.924882 containerd[1577]: time="2025-10-29T00:40:09.923789686Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 29 00:40:09.924882 containerd[1577]: time="2025-10-29T00:40:09.923798501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 29 00:40:09.924882 containerd[1577]: time="2025-10-29T00:40:09.923808281Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 29 00:40:09.924882 containerd[1577]: time="2025-10-29T00:40:09.923826171Z" level=info msg="runtime interface created" Oct 29 00:40:09.924882 containerd[1577]: time="2025-10-29T00:40:09.923831227Z" level=info msg="created NRI interface" Oct 29 00:40:09.924882 containerd[1577]: time="2025-10-29T00:40:09.923839016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 29 00:40:09.927204 containerd[1577]: time="2025-10-29T00:40:09.926099295Z" level=info msg="Connect containerd service" Oct 29 00:40:09.927204 containerd[1577]: time="2025-10-29T00:40:09.926198296Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 29 00:40:09.932704 containerd[1577]: time="2025-10-29T00:40:09.932495891Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 00:40:10.000279 systemd-logind[1563]: New seat seat0. Oct 29 00:40:10.040617 sshd_keygen[1604]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 00:40:10.046918 kernel: Console: switching to colour dummy device 80x25 Oct 29 00:40:10.053817 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 29 00:40:10.053956 systemd-logind[1563]: Watching system buttons on /dev/input/event2 (Power Button) Oct 29 00:40:10.057035 systemd-vconsole-setup[1612]: KD_FONT_OP_SET failed, fonts will not be copied to tty6: Function not implemented Oct 29 00:40:10.059246 systemd[1]: Started systemd-logind.service - User Login Management. Oct 29 00:40:10.059675 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Oct 29 00:40:10.065335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:40:10.101785 systemd[1]: Starting sshkeys.service... Oct 29 00:40:10.109353 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 29 00:40:10.128837 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 29 00:40:10.136609 systemd-networkd[1487]: eth1: Gained IPv6LL Oct 29 00:40:10.140682 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 00:40:10.141993 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Oct 29 00:40:10.147002 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 00:40:10.151892 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 29 00:40:10.151983 kernel: [drm] features: -context_init Oct 29 00:40:10.151399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:40:10.160250 kernel: [drm] number of scanouts: 1 Oct 29 00:40:10.160346 kernel: [drm] number of cap sets: 0 Oct 29 00:40:10.158646 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 29 00:40:10.183527 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 29 00:40:10.188064 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 29 00:40:10.216768 containerd[1577]: time="2025-10-29T00:40:10.216000663Z" level=info msg="Start subscribing containerd event" Oct 29 00:40:10.216768 containerd[1577]: time="2025-10-29T00:40:10.216056521Z" level=info msg="Start recovering state" Oct 29 00:40:10.216768 containerd[1577]: time="2025-10-29T00:40:10.216194720Z" level=info msg="Start event monitor" Oct 29 00:40:10.216768 containerd[1577]: time="2025-10-29T00:40:10.216210346Z" level=info msg="Start cni network conf syncer for default" Oct 29 00:40:10.216768 containerd[1577]: time="2025-10-29T00:40:10.216225251Z" level=info msg="Start streaming server" Oct 29 00:40:10.216768 containerd[1577]: time="2025-10-29T00:40:10.216236138Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 29 00:40:10.216768 containerd[1577]: time="2025-10-29T00:40:10.216245019Z" level=info msg="runtime interface starting up..." Oct 29 00:40:10.216768 containerd[1577]: time="2025-10-29T00:40:10.216251075Z" level=info msg="starting plugins..." Oct 29 00:40:10.216768 containerd[1577]: time="2025-10-29T00:40:10.216266302Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 29 00:40:10.218881 containerd[1577]: time="2025-10-29T00:40:10.217697667Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 00:40:10.218881 containerd[1577]: time="2025-10-29T00:40:10.217770627Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 29 00:40:10.219404 containerd[1577]: time="2025-10-29T00:40:10.219303553Z" level=info msg="containerd successfully booted in 0.349536s" Oct 29 00:40:10.237010 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 00:40:10.251082 systemd[1]: Started containerd.service - containerd container runtime. Oct 29 00:40:10.253072 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 00:40:10.254905 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Oct 29 00:40:10.256184 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Oct 29 00:40:10.265127 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 29 00:40:10.265220 kernel: Console: switching to colour frame buffer device 128x48 Oct 29 00:40:10.272887 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 29 00:40:10.295064 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 29 00:40:10.299028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 00:40:10.299494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:40:10.299805 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 00:40:10.303388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 00:40:10.340786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 00:40:10.341114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:40:10.348257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 00:40:10.361129 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 29 00:40:10.368418 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 29 00:40:10.370717 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 29 00:40:10.371292 systemd[1]: Reached target getty.target - Login Prompts. Oct 29 00:40:10.393599 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 00:40:10.415328 kernel: EDAC MC: Ver: 3.0.0 Oct 29 00:40:10.440615 coreos-metadata[1676]: Oct 29 00:40:10.440 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 29 00:40:10.451560 systemd-networkd[1487]: eth0: Gained IPv6LL Oct 29 00:40:10.452031 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Oct 29 00:40:10.455354 coreos-metadata[1676]: Oct 29 00:40:10.455 INFO Fetch successful Oct 29 00:40:10.465301 unknown[1676]: wrote ssh authorized keys file for user: core Oct 29 00:40:10.490720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:40:10.508363 update-ssh-keys[1705]: Updated "/home/core/.ssh/authorized_keys" Oct 29 00:40:10.509633 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 29 00:40:10.514228 systemd[1]: Finished sshkeys.service. Oct 29 00:40:10.705193 tar[1567]: linux-amd64/README.md Oct 29 00:40:10.726661 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 29 00:40:11.508940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:40:11.510681 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 29 00:40:11.514416 systemd[1]: Startup finished in 2.525s (kernel) + 5.795s (initrd) + 5.994s (userspace) = 14.315s. 
Oct 29 00:40:11.518405 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 00:40:12.126641 kubelet[1718]: E1029 00:40:12.126555 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:40:12.130383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:40:12.130582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:40:12.131503 systemd[1]: kubelet.service: Consumed 1.192s CPU time, 258M memory peak. Oct 29 00:40:12.160524 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 29 00:40:12.162384 systemd[1]: Started sshd@0-64.23.202.85:22-139.178.89.65:52392.service - OpenSSH per-connection server daemon (139.178.89.65:52392). Oct 29 00:40:12.281785 sshd[1730]: Accepted publickey for core from 139.178.89.65 port 52392 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:40:12.283836 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:40:12.295056 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 29 00:40:12.296775 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 29 00:40:12.311121 systemd-logind[1563]: New session 1 of user core. Oct 29 00:40:12.331526 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 29 00:40:12.337389 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 29 00:40:12.362774 (systemd)[1735]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:40:12.366635 systemd-logind[1563]: New session c1 of user core. Oct 29 00:40:12.606813 systemd[1735]: Queued start job for default target default.target. Oct 29 00:40:12.629905 systemd[1735]: Created slice app.slice - User Application Slice. Oct 29 00:40:12.629991 systemd[1735]: Reached target paths.target - Paths. Oct 29 00:40:12.630101 systemd[1735]: Reached target timers.target - Timers. Oct 29 00:40:12.632454 systemd[1735]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 29 00:40:12.648345 systemd[1735]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 29 00:40:12.648547 systemd[1735]: Reached target sockets.target - Sockets. Oct 29 00:40:12.648641 systemd[1735]: Reached target basic.target - Basic System. Oct 29 00:40:12.648705 systemd[1735]: Reached target default.target - Main User Target. Oct 29 00:40:12.648753 systemd[1735]: Startup finished in 267ms. Oct 29 00:40:12.648822 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 29 00:40:12.659191 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 29 00:40:12.730252 systemd[1]: Started sshd@1-64.23.202.85:22-139.178.89.65:52398.service - OpenSSH per-connection server daemon (139.178.89.65:52398). Oct 29 00:40:12.795732 sshd[1746]: Accepted publickey for core from 139.178.89.65 port 52398 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:40:12.797993 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:40:12.805121 systemd-logind[1563]: New session 2 of user core. 
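[Editor's note] The kubelet exit above is caused by a missing /var/lib/kubelet/config.yaml; on a freshly provisioned node that file is normally written later (for example by kubeadm), so the failure and the later restarts are expected at this stage. A minimal sketch of the same precondition check, assuming the default path taken from the error message:

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    if KUBELET_CONFIG.is_file():
        print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
    else:
        # Matches the failure in the log: the kubelet cannot start without its config file.
        print(f"{KUBELET_CONFIG} missing - kubelet will exit until it is provisioned")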
Oct 29 00:40:12.816175 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 29 00:40:12.891983 sshd[1749]: Connection closed by 139.178.89.65 port 52398 Oct 29 00:40:12.891745 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Oct 29 00:40:12.909506 systemd[1]: sshd@1-64.23.202.85:22-139.178.89.65:52398.service: Deactivated successfully. Oct 29 00:40:12.912269 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 00:40:12.913729 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit. Oct 29 00:40:12.918133 systemd[1]: Started sshd@2-64.23.202.85:22-139.178.89.65:52412.service - OpenSSH per-connection server daemon (139.178.89.65:52412). Oct 29 00:40:12.919652 systemd-logind[1563]: Removed session 2. Oct 29 00:40:12.992135 sshd[1755]: Accepted publickey for core from 139.178.89.65 port 52412 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:40:12.994547 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:40:13.003572 systemd-logind[1563]: New session 3 of user core. Oct 29 00:40:13.014394 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 29 00:40:13.073402 sshd[1758]: Connection closed by 139.178.89.65 port 52412 Oct 29 00:40:13.073254 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Oct 29 00:40:13.088204 systemd[1]: sshd@2-64.23.202.85:22-139.178.89.65:52412.service: Deactivated successfully. Oct 29 00:40:13.091239 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 00:40:13.092410 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit. Oct 29 00:40:13.095654 systemd-logind[1563]: Removed session 3. Oct 29 00:40:13.098212 systemd[1]: Started sshd@3-64.23.202.85:22-139.178.89.65:52428.service - OpenSSH per-connection server daemon (139.178.89.65:52428). Oct 29 00:40:13.163896 sshd[1764]: Accepted publickey for core from 139.178.89.65 port 52428 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:40:13.166613 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:40:13.172405 systemd-logind[1563]: New session 4 of user core. Oct 29 00:40:13.183226 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 29 00:40:13.244011 sshd[1767]: Connection closed by 139.178.89.65 port 52428 Oct 29 00:40:13.245004 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Oct 29 00:40:13.261187 systemd[1]: sshd@3-64.23.202.85:22-139.178.89.65:52428.service: Deactivated successfully. Oct 29 00:40:13.263595 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 00:40:13.265347 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. Oct 29 00:40:13.269091 systemd[1]: Started sshd@4-64.23.202.85:22-139.178.89.65:52430.service - OpenSSH per-connection server daemon (139.178.89.65:52430). Oct 29 00:40:13.270069 systemd-logind[1563]: Removed session 4. Oct 29 00:40:13.333914 sshd[1773]: Accepted publickey for core from 139.178.89.65 port 52430 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:40:13.335984 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:40:13.341362 systemd-logind[1563]: New session 5 of user core. Oct 29 00:40:13.352207 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 29 00:40:13.427734 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 00:40:13.428108 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:40:13.443591 sudo[1777]: pam_unix(sudo:session): session closed for user root Oct 29 00:40:13.447632 sshd[1776]: Connection closed by 139.178.89.65 port 52430 Oct 29 00:40:13.447454 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Oct 29 00:40:13.463560 systemd[1]: sshd@4-64.23.202.85:22-139.178.89.65:52430.service: Deactivated successfully. Oct 29 00:40:13.466526 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 00:40:13.467944 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. Oct 29 00:40:13.470754 systemd-logind[1563]: Removed session 5. Oct 29 00:40:13.473050 systemd[1]: Started sshd@5-64.23.202.85:22-139.178.89.65:52446.service - OpenSSH per-connection server daemon (139.178.89.65:52446). Oct 29 00:40:13.541337 sshd[1783]: Accepted publickey for core from 139.178.89.65 port 52446 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:40:13.542868 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:40:13.549376 systemd-logind[1563]: New session 6 of user core. Oct 29 00:40:13.556183 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 29 00:40:13.617043 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 00:40:13.617374 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:40:13.622895 sudo[1788]: pam_unix(sudo:session): session closed for user root Oct 29 00:40:13.631278 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 29 00:40:13.631566 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:40:13.644146 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 00:40:13.693449 augenrules[1810]: No rules Oct 29 00:40:13.694485 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 00:40:13.695201 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 00:40:13.697631 sudo[1787]: pam_unix(sudo:session): session closed for user root Oct 29 00:40:13.701526 sshd[1786]: Connection closed by 139.178.89.65 port 52446 Oct 29 00:40:13.702255 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Oct 29 00:40:13.715606 systemd[1]: sshd@5-64.23.202.85:22-139.178.89.65:52446.service: Deactivated successfully. Oct 29 00:40:13.717848 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 00:40:13.719593 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. Oct 29 00:40:13.722400 systemd[1]: Started sshd@6-64.23.202.85:22-139.178.89.65:52456.service - OpenSSH per-connection server daemon (139.178.89.65:52456). Oct 29 00:40:13.724115 systemd-logind[1563]: Removed session 6. Oct 29 00:40:13.800566 sshd[1819]: Accepted publickey for core from 139.178.89.65 port 52456 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:40:13.802028 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:40:13.807637 systemd-logind[1563]: New session 7 of user core. Oct 29 00:40:13.818236 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 29 00:40:13.881552 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 00:40:13.882507 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:40:14.396182 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 29 00:40:14.412782 (dockerd)[1840]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 29 00:40:14.812306 dockerd[1840]: time="2025-10-29T00:40:14.810815778Z" level=info msg="Starting up" Oct 29 00:40:14.815235 dockerd[1840]: time="2025-10-29T00:40:14.815178674Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 29 00:40:14.833187 dockerd[1840]: time="2025-10-29T00:40:14.833125340Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 29 00:40:14.854181 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2699037703-merged.mount: Deactivated successfully. Oct 29 00:40:14.909355 systemd[1]: var-lib-docker-metacopy\x2dcheck2443827671-merged.mount: Deactivated successfully. Oct 29 00:40:14.944944 dockerd[1840]: time="2025-10-29T00:40:14.944423980Z" level=info msg="Loading containers: start." Oct 29 00:40:14.958937 kernel: Initializing XFRM netlink socket Oct 29 00:40:15.269441 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Oct 29 00:40:15.284202 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Oct 29 00:40:15.339368 systemd-networkd[1487]: docker0: Link UP Oct 29 00:40:15.340688 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Oct 29 00:40:15.343289 dockerd[1840]: time="2025-10-29T00:40:15.343072860Z" level=info msg="Loading containers: done." Oct 29 00:40:15.365819 dockerd[1840]: time="2025-10-29T00:40:15.365236262Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 00:40:15.365819 dockerd[1840]: time="2025-10-29T00:40:15.365354776Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 29 00:40:15.365819 dockerd[1840]: time="2025-10-29T00:40:15.365450330Z" level=info msg="Initializing buildkit" Oct 29 00:40:15.389235 dockerd[1840]: time="2025-10-29T00:40:15.389176291Z" level=info msg="Completed buildkit initialization" Oct 29 00:40:15.403890 dockerd[1840]: time="2025-10-29T00:40:15.403319911Z" level=info msg="Daemon has completed initialization" Oct 29 00:40:15.403890 dockerd[1840]: time="2025-10-29T00:40:15.403437540Z" level=info msg="API listen on /run/docker.sock" Oct 29 00:40:15.404278 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 29 00:40:15.851786 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1584960632-merged.mount: Deactivated successfully. Oct 29 00:40:16.199107 containerd[1577]: time="2025-10-29T00:40:16.198974522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 29 00:40:16.842659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98862243.mount: Deactivated successfully. 
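[Editor's note] dockerd reports "API listen on /run/docker.sock" above. A sketch of talking to that API over the unix socket with only the standard library; it assumes the daemon is running and the caller can read /run/docker.sock, and uses GET /version, a stable Engine API endpoint:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())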
Oct 29 00:40:17.990335 containerd[1577]: time="2025-10-29T00:40:17.989891788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:17.991775 containerd[1577]: time="2025-10-29T00:40:17.991723695Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 29 00:40:17.992406 containerd[1577]: time="2025-10-29T00:40:17.992363276Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:17.998397 containerd[1577]: time="2025-10-29T00:40:17.998325868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:18.000880 containerd[1577]: time="2025-10-29T00:40:18.000236739Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.801213017s" Oct 29 00:40:18.000880 containerd[1577]: time="2025-10-29T00:40:18.000313179Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 29 00:40:18.003687 containerd[1577]: time="2025-10-29T00:40:18.003008520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 29 00:40:19.327006 containerd[1577]: time="2025-10-29T00:40:19.326254801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:19.328930 containerd[1577]: time="2025-10-29T00:40:19.328841564Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 29 00:40:19.329108 containerd[1577]: time="2025-10-29T00:40:19.329067620Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:19.332945 containerd[1577]: time="2025-10-29T00:40:19.332842135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:19.335505 containerd[1577]: time="2025-10-29T00:40:19.335311372Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.33223957s" Oct 29 00:40:19.335505 containerd[1577]: time="2025-10-29T00:40:19.335371856Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 29 00:40:19.336094 
containerd[1577]: time="2025-10-29T00:40:19.336046553Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 29 00:40:20.321902 containerd[1577]: time="2025-10-29T00:40:20.321333582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:20.324350 containerd[1577]: time="2025-10-29T00:40:20.324277718Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 29 00:40:20.324963 containerd[1577]: time="2025-10-29T00:40:20.324660195Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:20.328587 containerd[1577]: time="2025-10-29T00:40:20.328521425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:20.332177 containerd[1577]: time="2025-10-29T00:40:20.332112365Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 995.854666ms" Oct 29 00:40:20.332614 containerd[1577]: time="2025-10-29T00:40:20.332398487Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 29 00:40:20.333608 containerd[1577]: time="2025-10-29T00:40:20.333566068Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 29 00:40:21.562583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2979655745.mount: Deactivated successfully. 
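[Editor's note] The kube-apiserver pull above reports 27,061,991 bytes transferred in 1.801213017s. A back-of-the-envelope throughput calculation from those two figures in the log:

    size_bytes = 27_061_991          # image size reported for kube-apiserver:v1.34.1
    elapsed_s = 1.801213017          # pull duration reported by containerd

    rate = size_bytes / elapsed_s    # roughly 15.0 MB/s, or 14.3 MiB/s
    print(f"{rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")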
Oct 29 00:40:22.008917 containerd[1577]: time="2025-10-29T00:40:22.008340401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:22.010119 containerd[1577]: time="2025-10-29T00:40:22.010074221Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 29 00:40:22.010640 containerd[1577]: time="2025-10-29T00:40:22.010603015Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:22.013088 containerd[1577]: time="2025-10-29T00:40:22.013043522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:22.014179 containerd[1577]: time="2025-10-29T00:40:22.014131512Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.680390674s" Oct 29 00:40:22.014179 containerd[1577]: time="2025-10-29T00:40:22.014176790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 29 00:40:22.014777 containerd[1577]: time="2025-10-29T00:40:22.014742791Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 29 00:40:22.377536 systemd-resolved[1279]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Oct 29 00:40:22.381203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 00:40:22.384184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:40:22.519737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633552509.mount: Deactivated successfully. Oct 29 00:40:22.638183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:40:22.660447 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 00:40:22.792492 kubelet[2157]: E1029 00:40:22.792418 2157 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:40:22.801499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:40:22.801711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:40:22.804067 systemd[1]: kubelet.service: Consumed 271ms CPU time, 110.5M memory peak. 
Oct 29 00:40:23.697995 containerd[1577]: time="2025-10-29T00:40:23.697915800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:23.700021 containerd[1577]: time="2025-10-29T00:40:23.699956632Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 29 00:40:23.700884 containerd[1577]: time="2025-10-29T00:40:23.700813472Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:23.704575 containerd[1577]: time="2025-10-29T00:40:23.704499950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:23.705910 containerd[1577]: time="2025-10-29T00:40:23.705415052Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.690638142s" Oct 29 00:40:23.705910 containerd[1577]: time="2025-10-29T00:40:23.705463966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 29 00:40:23.706740 containerd[1577]: time="2025-10-29T00:40:23.706199554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 29 00:40:24.101681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176507315.mount: Deactivated successfully. 
Oct 29 00:40:24.108994 containerd[1577]: time="2025-10-29T00:40:24.108051833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:24.108994 containerd[1577]: time="2025-10-29T00:40:24.108944482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 29 00:40:24.109305 containerd[1577]: time="2025-10-29T00:40:24.109284216Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:24.111154 containerd[1577]: time="2025-10-29T00:40:24.111106968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:24.111902 containerd[1577]: time="2025-10-29T00:40:24.111843767Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 405.610914ms" Oct 29 00:40:24.111987 containerd[1577]: time="2025-10-29T00:40:24.111909170Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 29 00:40:24.112883 containerd[1577]: time="2025-10-29T00:40:24.112835491Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 29 00:40:25.428102 systemd-resolved[1279]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Oct 29 00:40:26.984947 containerd[1577]: time="2025-10-29T00:40:26.984847970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:26.986602 containerd[1577]: time="2025-10-29T00:40:26.986444089Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 29 00:40:26.987917 containerd[1577]: time="2025-10-29T00:40:26.987548041Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:26.992562 containerd[1577]: time="2025-10-29T00:40:26.991515291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:26.993618 containerd[1577]: time="2025-10-29T00:40:26.993544988Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.880491514s" Oct 29 00:40:26.993618 containerd[1577]: time="2025-10-29T00:40:26.993621167Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 29 00:40:31.675160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 29 00:40:31.675510 systemd[1]: kubelet.service: Consumed 271ms CPU time, 110.5M memory peak. Oct 29 00:40:31.679657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:40:31.723112 systemd[1]: Reload requested from client PID 2277 ('systemctl') (unit session-7.scope)... Oct 29 00:40:31.723142 systemd[1]: Reloading... Oct 29 00:40:31.878922 zram_generator::config[2321]: No configuration found. Oct 29 00:40:32.257113 systemd[1]: Reloading finished in 533 ms. Oct 29 00:40:32.332667 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 29 00:40:32.332941 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 29 00:40:32.333338 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:40:32.333399 systemd[1]: kubelet.service: Consumed 149ms CPU time, 98.4M memory peak. Oct 29 00:40:32.335279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:40:32.554254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:40:32.566444 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 00:40:32.627878 kubelet[2375]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 00:40:32.628303 kubelet[2375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 00:40:32.628522 kubelet[2375]: I1029 00:40:32.628479 2375 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 00:40:33.458050 kubelet[2375]: I1029 00:40:33.457986 2375 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 29 00:40:33.458050 kubelet[2375]: I1029 00:40:33.458031 2375 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 00:40:33.460656 kubelet[2375]: I1029 00:40:33.460604 2375 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 29 00:40:33.460770 kubelet[2375]: I1029 00:40:33.460665 2375 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 00:40:33.461086 kubelet[2375]: I1029 00:40:33.461040 2375 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 00:40:33.475910 kubelet[2375]: E1029 00:40:33.475454 2375 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.23.202.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.202.85:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 29 00:40:33.476098 kubelet[2375]: I1029 00:40:33.476043 2375 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:40:33.493331 kubelet[2375]: I1029 00:40:33.493260 2375 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 00:40:33.509743 kubelet[2375]: I1029 00:40:33.509263 2375 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 29 00:40:33.510258 kubelet[2375]: I1029 00:40:33.510217 2375 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 00:40:33.513013 kubelet[2375]: I1029 00:40:33.510320 2375 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-n-61970e6314","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 00:40:33.513268 kubelet[2375]: I1029 00:40:33.513253 2375 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 00:40:33.513333 kubelet[2375]: I1029 00:40:33.513326 2375 container_manager_linux.go:306] "Creating device plugin manager" Oct 29 00:40:33.513492 kubelet[2375]: I1029 00:40:33.513480 2375 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 29 00:40:33.515193 kubelet[2375]: I1029 00:40:33.515169 2375 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:40:33.515559 kubelet[2375]: I1029 00:40:33.515547 2375 kubelet.go:475] "Attempting to sync node with API server" Oct 29 00:40:33.515729 kubelet[2375]: I1029 00:40:33.515622 2375 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 00:40:33.515729 kubelet[2375]: I1029 00:40:33.515652 2375 kubelet.go:387] "Adding apiserver pod source" Oct 29 00:40:33.515729 kubelet[2375]: I1029 00:40:33.515666 2375 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 00:40:33.516170 kubelet[2375]: E1029 00:40:33.516138 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.202.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-61970e6314&limit=500&resourceVersion=0\": dial tcp 64.23.202.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 00:40:33.519532 kubelet[2375]: E1029 00:40:33.519444 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://64.23.202.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.202.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 00:40:33.521301 kubelet[2375]: I1029 00:40:33.521163 2375 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 00:40:33.525061 kubelet[2375]: I1029 00:40:33.525032 2375 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 00:40:33.525224 kubelet[2375]: I1029 00:40:33.525213 2375 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 29 00:40:33.525321 kubelet[2375]: W1029 00:40:33.525313 2375 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 00:40:33.529794 kubelet[2375]: I1029 00:40:33.529763 2375 server.go:1262] "Started kubelet" Oct 29 00:40:33.532260 kubelet[2375]: I1029 00:40:33.532090 2375 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 00:40:33.538068 kubelet[2375]: E1029 00:40:33.535165 2375 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.202.85:6443/api/v1/namespaces/default/events\": dial tcp 64.23.202.85:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-n-61970e6314.1872cf6433e681ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-n-61970e6314,UID:ci-4487.0.0-n-61970e6314,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-n-61970e6314,},FirstTimestamp:2025-10-29 00:40:33.529708974 +0000 UTC m=+0.957638900,LastTimestamp:2025-10-29 00:40:33.529708974 +0000 UTC m=+0.957638900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-n-61970e6314,}" Oct 29 00:40:33.538785 kubelet[2375]: I1029 00:40:33.538745 2375 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 00:40:33.548880 kubelet[2375]: I1029 00:40:33.546964 2375 server.go:310] "Adding debug handlers to kubelet server" Oct 29 00:40:33.550130 kubelet[2375]: I1029 00:40:33.550102 2375 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 29 00:40:33.550781 kubelet[2375]: E1029 00:40:33.550348 2375 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-61970e6314\" not found" Oct 29 00:40:33.553704 kubelet[2375]: I1029 00:40:33.553657 2375 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 00:40:33.554008 kubelet[2375]: I1029 00:40:33.553991 2375 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 29 00:40:33.554327 kubelet[2375]: I1029 00:40:33.554309 2375 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 00:40:33.555230 kubelet[2375]: I1029 00:40:33.555184 2375 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 29 00:40:33.555230 kubelet[2375]: I1029 00:40:33.555236 2375 reconciler.go:29] "Reconciler: 
start to sync state" Oct 29 00:40:33.555777 kubelet[2375]: I1029 00:40:33.555756 2375 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 00:40:33.558346 kubelet[2375]: E1029 00:40:33.558258 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.202.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-61970e6314?timeout=10s\": dial tcp 64.23.202.85:6443: connect: connection refused" interval="200ms" Oct 29 00:40:33.558878 kubelet[2375]: I1029 00:40:33.558711 2375 factory.go:223] Registration of the systemd container factory successfully Oct 29 00:40:33.558878 kubelet[2375]: I1029 00:40:33.558804 2375 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 00:40:33.561262 kubelet[2375]: E1029 00:40:33.560353 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.202.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.202.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 00:40:33.564243 kubelet[2375]: I1029 00:40:33.563582 2375 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 29 00:40:33.565651 kubelet[2375]: I1029 00:40:33.565623 2375 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 29 00:40:33.565777 kubelet[2375]: I1029 00:40:33.565765 2375 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 29 00:40:33.565882 kubelet[2375]: I1029 00:40:33.565872 2375 kubelet.go:2427] "Starting kubelet main sync loop" Oct 29 00:40:33.566004 kubelet[2375]: E1029 00:40:33.565987 2375 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 00:40:33.568015 kubelet[2375]: I1029 00:40:33.567984 2375 factory.go:223] Registration of the containerd container factory successfully Oct 29 00:40:33.575766 kubelet[2375]: E1029 00:40:33.575733 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.202.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.202.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 00:40:33.576001 kubelet[2375]: E1029 00:40:33.575976 2375 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 00:40:33.600008 kubelet[2375]: I1029 00:40:33.599975 2375 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 00:40:33.600008 kubelet[2375]: I1029 00:40:33.599993 2375 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 00:40:33.600008 kubelet[2375]: I1029 00:40:33.600014 2375 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:40:33.601847 kubelet[2375]: I1029 00:40:33.601774 2375 policy_none.go:49] "None policy: Start" Oct 29 00:40:33.601847 kubelet[2375]: I1029 00:40:33.601809 2375 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 29 00:40:33.602204 kubelet[2375]: I1029 00:40:33.601828 2375 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 29 00:40:33.603836 kubelet[2375]: I1029 00:40:33.603813 2375 policy_none.go:47] "Start" Oct 29 00:40:33.608389 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 29 00:40:33.627300 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 29 00:40:33.631773 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 29 00:40:33.644250 kubelet[2375]: E1029 00:40:33.644212 2375 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 00:40:33.644612 kubelet[2375]: I1029 00:40:33.644501 2375 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:40:33.644612 kubelet[2375]: I1029 00:40:33.644516 2375 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:40:33.645074 kubelet[2375]: I1029 00:40:33.645044 2375 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 00:40:33.648915 kubelet[2375]: E1029 00:40:33.648356 2375 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 00:40:33.649079 kubelet[2375]: E1029 00:40:33.649064 2375 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.0-n-61970e6314\" not found" Oct 29 00:40:33.681610 systemd[1]: Created slice kubepods-burstable-pod72a309c24a0f0322168a157f9db79db6.slice - libcontainer container kubepods-burstable-pod72a309c24a0f0322168a157f9db79db6.slice. Oct 29 00:40:33.701346 kubelet[2375]: E1029 00:40:33.701256 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-61970e6314\" not found" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.704110 systemd[1]: Created slice kubepods-burstable-pod0a5ca2886ac2e7d05ff7258c62b960ee.slice - libcontainer container kubepods-burstable-pod0a5ca2886ac2e7d05ff7258c62b960ee.slice. Oct 29 00:40:33.717841 kubelet[2375]: E1029 00:40:33.717622 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-61970e6314\" not found" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.723178 systemd[1]: Created slice kubepods-burstable-pod1d757140e7ff312a43fb4d0593e5ef8d.slice - libcontainer container kubepods-burstable-pod1d757140e7ff312a43fb4d0593e5ef8d.slice. 
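The Created slice messages show the kubelet, using the systemd cgroup driver reported at startup, asking systemd for one slice per QoS class (kubepods-burstable.slice, kubepods-besteffort.slice) and one per static pod, named from the parent slice plus the pod UID. The helper below is a hypothetical reconstruction of that naming scheme for illustration only, not the kubelet's own code; the escaping of dashes to underscores is assumed from systemd's use of '-' as the slice hierarchy separator.

package main

import (
	"fmt"
	"strings"
)

// podSliceName rebuilds the slice names visible in the log above:
// the QoS parent plus a "pod<uid>" segment, dash-joined, with a ".slice" suffix.
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_") // '-' separates slice levels, so UID dashes are escaped
	parts := []string{"kubepods"}
	if qosClass != "guaranteed" { // guaranteed pods sit directly under kubepods.slice
		parts = append(parts, qosClass)
	}
	parts = append(parts, "pod"+uid)
	return strings.Join(parts, "-") + ".slice"
}

func main() {
	fmt.Println(podSliceName("burstable", "72a309c24a0f0322168a157f9db79db6"))
	// kubepods-burstable-pod72a309c24a0f0322168a157f9db79db6.slice
}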
Oct 29 00:40:33.725419 kubelet[2375]: E1029 00:40:33.725365 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-61970e6314\" not found" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.747209 kubelet[2375]: I1029 00:40:33.746942 2375 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.747611 kubelet[2375]: E1029 00:40:33.747558 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.202.85:6443/api/v1/nodes\": dial tcp 64.23.202.85:6443: connect: connection refused" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.759611 kubelet[2375]: E1029 00:40:33.759543 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.202.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-61970e6314?timeout=10s\": dial tcp 64.23.202.85:6443: connect: connection refused" interval="400ms" Oct 29 00:40:33.856235 kubelet[2375]: I1029 00:40:33.856165 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a5ca2886ac2e7d05ff7258c62b960ee-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-61970e6314\" (UID: \"0a5ca2886ac2e7d05ff7258c62b960ee\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.856235 kubelet[2375]: I1029 00:40:33.856226 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a5ca2886ac2e7d05ff7258c62b960ee-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-61970e6314\" (UID: \"0a5ca2886ac2e7d05ff7258c62b960ee\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.856490 kubelet[2375]: I1029 00:40:33.856260 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72a309c24a0f0322168a157f9db79db6-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" (UID: \"72a309c24a0f0322168a157f9db79db6\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.856490 kubelet[2375]: I1029 00:40:33.856285 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/72a309c24a0f0322168a157f9db79db6-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" (UID: \"72a309c24a0f0322168a157f9db79db6\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.856490 kubelet[2375]: I1029 00:40:33.856309 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a309c24a0f0322168a157f9db79db6-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" (UID: \"72a309c24a0f0322168a157f9db79db6\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.856490 kubelet[2375]: I1029 00:40:33.856335 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d757140e7ff312a43fb4d0593e5ef8d-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-61970e6314\" (UID: \"1d757140e7ff312a43fb4d0593e5ef8d\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-61970e6314" Oct 29 
00:40:33.856490 kubelet[2375]: I1029 00:40:33.856363 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a5ca2886ac2e7d05ff7258c62b960ee-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-61970e6314\" (UID: \"0a5ca2886ac2e7d05ff7258c62b960ee\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.856652 kubelet[2375]: I1029 00:40:33.856387 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72a309c24a0f0322168a157f9db79db6-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" (UID: \"72a309c24a0f0322168a157f9db79db6\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.856652 kubelet[2375]: I1029 00:40:33.856440 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72a309c24a0f0322168a157f9db79db6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" (UID: \"72a309c24a0f0322168a157f9db79db6\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.949424 kubelet[2375]: I1029 00:40:33.949379 2375 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:33.949956 kubelet[2375]: E1029 00:40:33.949910 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.202.85:6443/api/v1/nodes\": dial tcp 64.23.202.85:6443: connect: connection refused" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:34.004289 kubelet[2375]: E1029 00:40:34.004115 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:34.005139 containerd[1577]: time="2025-10-29T00:40:34.005018012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-61970e6314,Uid:72a309c24a0f0322168a157f9db79db6,Namespace:kube-system,Attempt:0,}" Oct 29 00:40:34.010662 systemd-resolved[1279]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
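The dns.go "Nameserver limits exceeded" warnings repeat throughout this boot: the resolver only honors three nameserver entries, and the applied line here ("67.207.67.3 67.207.67.2 67.207.67.3") also carries a duplicate, so the kubelet drops the excess when building pod resolv.conf files. The snippet below is a hypothetical diagnostic, not part of the kubelet; it assumes the standard /etc/resolv.conf path and the three-server limit.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // resolver limit behind the kubelet warning

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	seen := map[string]bool{}
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// Collect distinct "nameserver <addr>" entries, preserving order.
		if len(fields) >= 2 && fields[0] == "nameserver" && !seen[fields[1]] {
			seen[fields[1]] = true
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	if len(servers) > maxNameservers {
		fmt.Printf("%d distinct nameservers; only the first %d are used: %v\n",
			len(servers), maxNameservers, servers[:maxNameservers])
	} else {
		fmt.Println("nameserver count within limits:", servers)
	}
}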
Oct 29 00:40:34.019612 kubelet[2375]: E1029 00:40:34.019554 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:34.025870 containerd[1577]: time="2025-10-29T00:40:34.025801888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-61970e6314,Uid:0a5ca2886ac2e7d05ff7258c62b960ee,Namespace:kube-system,Attempt:0,}" Oct 29 00:40:34.027695 kubelet[2375]: E1029 00:40:34.027667 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:34.029017 containerd[1577]: time="2025-10-29T00:40:34.028388177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-61970e6314,Uid:1d757140e7ff312a43fb4d0593e5ef8d,Namespace:kube-system,Attempt:0,}" Oct 29 00:40:34.160590 kubelet[2375]: E1029 00:40:34.160549 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.202.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-61970e6314?timeout=10s\": dial tcp 64.23.202.85:6443: connect: connection refused" interval="800ms" Oct 29 00:40:34.351479 kubelet[2375]: I1029 00:40:34.351266 2375 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:34.352413 kubelet[2375]: E1029 00:40:34.352374 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.202.85:6443/api/v1/nodes\": dial tcp 64.23.202.85:6443: connect: connection refused" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:34.530257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009037382.mount: Deactivated successfully. 
Oct 29 00:40:34.533983 containerd[1577]: time="2025-10-29T00:40:34.533913120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:40:34.534921 containerd[1577]: time="2025-10-29T00:40:34.534889230Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 29 00:40:34.536371 containerd[1577]: time="2025-10-29T00:40:34.536328906Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:40:34.539048 containerd[1577]: time="2025-10-29T00:40:34.539003560Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:40:34.540879 containerd[1577]: time="2025-10-29T00:40:34.539273816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 29 00:40:34.540879 containerd[1577]: time="2025-10-29T00:40:34.540078621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 29 00:40:34.540879 containerd[1577]: time="2025-10-29T00:40:34.540475356Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:40:34.542575 containerd[1577]: time="2025-10-29T00:40:34.542524525Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 512.327001ms" Oct 29 00:40:34.544429 containerd[1577]: time="2025-10-29T00:40:34.544393134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 534.896062ms" Oct 29 00:40:34.545931 containerd[1577]: time="2025-10-29T00:40:34.545898010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:40:34.553427 containerd[1577]: time="2025-10-29T00:40:34.553374797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 525.73021ms" Oct 29 00:40:34.684508 kubelet[2375]: E1029 00:40:34.683670 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.202.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.202.85:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 00:40:34.692769 kubelet[2375]: E1029 00:40:34.692724 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.202.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.202.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 00:40:34.705999 containerd[1577]: time="2025-10-29T00:40:34.705931616Z" level=info msg="connecting to shim ab20d96ea88e98f073c938bd1168686fdf63d0193f3384587310270efbec58ca" address="unix:///run/containerd/s/212eefd5a6c650fdb291318e44d4c9e9d11b06dad3db42880557955422cb0e04" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:40:34.707875 containerd[1577]: time="2025-10-29T00:40:34.707756893Z" level=info msg="connecting to shim 6c3f6a5fc67c804747db2ca6fed9abaf55e2e7069c7c06a86076f4bb5b20fc00" address="unix:///run/containerd/s/2e23d2d111def102d72c285e23c163bf152d771e52a1b51c693afa44420b3672" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:40:34.712401 containerd[1577]: time="2025-10-29T00:40:34.712361279Z" level=info msg="connecting to shim fda669c69ef2cda335271e7663885a68f137d77709aacb9aa0cb4e55edfd68d0" address="unix:///run/containerd/s/f5ff43e8f8c63c4b674db7cbb580f9b37651b951d76e347bf3d0cacd27067ec4" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:40:34.811112 systemd[1]: Started cri-containerd-6c3f6a5fc67c804747db2ca6fed9abaf55e2e7069c7c06a86076f4bb5b20fc00.scope - libcontainer container 6c3f6a5fc67c804747db2ca6fed9abaf55e2e7069c7c06a86076f4bb5b20fc00. Oct 29 00:40:34.813473 systemd[1]: Started cri-containerd-ab20d96ea88e98f073c938bd1168686fdf63d0193f3384587310270efbec58ca.scope - libcontainer container ab20d96ea88e98f073c938bd1168686fdf63d0193f3384587310270efbec58ca. Oct 29 00:40:34.815881 systemd[1]: Started cri-containerd-fda669c69ef2cda335271e7663885a68f137d77709aacb9aa0cb4e55edfd68d0.scope - libcontainer container fda669c69ef2cda335271e7663885a68f137d77709aacb9aa0cb4e55edfd68d0. 
Oct 29 00:40:34.896474 containerd[1577]: time="2025-10-29T00:40:34.896430506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-61970e6314,Uid:72a309c24a0f0322168a157f9db79db6,Namespace:kube-system,Attempt:0,} returns sandbox id \"fda669c69ef2cda335271e7663885a68f137d77709aacb9aa0cb4e55edfd68d0\"" Oct 29 00:40:34.900995 kubelet[2375]: E1029 00:40:34.900880 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:34.909463 containerd[1577]: time="2025-10-29T00:40:34.909418668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-61970e6314,Uid:0a5ca2886ac2e7d05ff7258c62b960ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab20d96ea88e98f073c938bd1168686fdf63d0193f3384587310270efbec58ca\"" Oct 29 00:40:34.910651 containerd[1577]: time="2025-10-29T00:40:34.910220005Z" level=info msg="CreateContainer within sandbox \"fda669c69ef2cda335271e7663885a68f137d77709aacb9aa0cb4e55edfd68d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 00:40:34.911876 kubelet[2375]: E1029 00:40:34.911706 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:34.918918 containerd[1577]: time="2025-10-29T00:40:34.918631369Z" level=info msg="CreateContainer within sandbox \"ab20d96ea88e98f073c938bd1168686fdf63d0193f3384587310270efbec58ca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 00:40:34.930311 containerd[1577]: time="2025-10-29T00:40:34.930162402Z" level=info msg="Container 1ef72b4b58c2a2312e72b2c907d10b2352efc9cca663dea6b0b8165eedba19ae: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:40:34.944633 containerd[1577]: time="2025-10-29T00:40:34.940934000Z" level=info msg="Container 9f65775ebde836e566828e5b95b3d25a258955324a8c642dc333d25ff7fee80a: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:40:34.953008 containerd[1577]: time="2025-10-29T00:40:34.952962969Z" level=info msg="CreateContainer within sandbox \"fda669c69ef2cda335271e7663885a68f137d77709aacb9aa0cb4e55edfd68d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9f65775ebde836e566828e5b95b3d25a258955324a8c642dc333d25ff7fee80a\"" Oct 29 00:40:34.953921 containerd[1577]: time="2025-10-29T00:40:34.953880235Z" level=info msg="StartContainer for \"9f65775ebde836e566828e5b95b3d25a258955324a8c642dc333d25ff7fee80a\"" Oct 29 00:40:34.956215 containerd[1577]: time="2025-10-29T00:40:34.956172282Z" level=info msg="CreateContainer within sandbox \"ab20d96ea88e98f073c938bd1168686fdf63d0193f3384587310270efbec58ca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ef72b4b58c2a2312e72b2c907d10b2352efc9cca663dea6b0b8165eedba19ae\"" Oct 29 00:40:34.957458 containerd[1577]: time="2025-10-29T00:40:34.957421515Z" level=info msg="connecting to shim 9f65775ebde836e566828e5b95b3d25a258955324a8c642dc333d25ff7fee80a" address="unix:///run/containerd/s/f5ff43e8f8c63c4b674db7cbb580f9b37651b951d76e347bf3d0cacd27067ec4" protocol=ttrpc version=3 Oct 29 00:40:34.958525 containerd[1577]: time="2025-10-29T00:40:34.958494927Z" level=info msg="StartContainer for \"1ef72b4b58c2a2312e72b2c907d10b2352efc9cca663dea6b0b8165eedba19ae\"" Oct 29 00:40:34.960451 containerd[1577]: 
time="2025-10-29T00:40:34.960417112Z" level=info msg="connecting to shim 1ef72b4b58c2a2312e72b2c907d10b2352efc9cca663dea6b0b8165eedba19ae" address="unix:///run/containerd/s/212eefd5a6c650fdb291318e44d4c9e9d11b06dad3db42880557955422cb0e04" protocol=ttrpc version=3 Oct 29 00:40:34.960926 containerd[1577]: time="2025-10-29T00:40:34.960889569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-61970e6314,Uid:1d757140e7ff312a43fb4d0593e5ef8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c3f6a5fc67c804747db2ca6fed9abaf55e2e7069c7c06a86076f4bb5b20fc00\"" Oct 29 00:40:34.962009 kubelet[2375]: E1029 00:40:34.961726 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.202.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-61970e6314?timeout=10s\": dial tcp 64.23.202.85:6443: connect: connection refused" interval="1.6s" Oct 29 00:40:34.962815 kubelet[2375]: E1029 00:40:34.962522 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:34.968229 containerd[1577]: time="2025-10-29T00:40:34.968173231Z" level=info msg="CreateContainer within sandbox \"6c3f6a5fc67c804747db2ca6fed9abaf55e2e7069c7c06a86076f4bb5b20fc00\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 00:40:34.979888 containerd[1577]: time="2025-10-29T00:40:34.979282324Z" level=info msg="Container e821f6ede3ce87ff92dc626d41e4b10b472b44535bfcf99792d0c783d03152eb: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:40:34.994098 systemd[1]: Started cri-containerd-1ef72b4b58c2a2312e72b2c907d10b2352efc9cca663dea6b0b8165eedba19ae.scope - libcontainer container 1ef72b4b58c2a2312e72b2c907d10b2352efc9cca663dea6b0b8165eedba19ae. Oct 29 00:40:34.997296 containerd[1577]: time="2025-10-29T00:40:34.997102490Z" level=info msg="CreateContainer within sandbox \"6c3f6a5fc67c804747db2ca6fed9abaf55e2e7069c7c06a86076f4bb5b20fc00\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e821f6ede3ce87ff92dc626d41e4b10b472b44535bfcf99792d0c783d03152eb\"" Oct 29 00:40:35.000557 containerd[1577]: time="2025-10-29T00:40:35.000490953Z" level=info msg="StartContainer for \"e821f6ede3ce87ff92dc626d41e4b10b472b44535bfcf99792d0c783d03152eb\"" Oct 29 00:40:35.002503 containerd[1577]: time="2025-10-29T00:40:35.002471440Z" level=info msg="connecting to shim e821f6ede3ce87ff92dc626d41e4b10b472b44535bfcf99792d0c783d03152eb" address="unix:///run/containerd/s/2e23d2d111def102d72c285e23c163bf152d771e52a1b51c693afa44420b3672" protocol=ttrpc version=3 Oct 29 00:40:35.005200 systemd[1]: Started cri-containerd-9f65775ebde836e566828e5b95b3d25a258955324a8c642dc333d25ff7fee80a.scope - libcontainer container 9f65775ebde836e566828e5b95b3d25a258955324a8c642dc333d25ff7fee80a. Oct 29 00:40:35.046071 systemd[1]: Started cri-containerd-e821f6ede3ce87ff92dc626d41e4b10b472b44535bfcf99792d0c783d03152eb.scope - libcontainer container e821f6ede3ce87ff92dc626d41e4b10b472b44535bfcf99792d0c783d03152eb. 
Oct 29 00:40:35.055779 kubelet[2375]: E1029 00:40:35.055730 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.202.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.202.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 00:40:35.087513 kubelet[2375]: E1029 00:40:35.087470 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.202.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-61970e6314&limit=500&resourceVersion=0\": dial tcp 64.23.202.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 00:40:35.103003 containerd[1577]: time="2025-10-29T00:40:35.102959742Z" level=info msg="StartContainer for \"1ef72b4b58c2a2312e72b2c907d10b2352efc9cca663dea6b0b8165eedba19ae\" returns successfully" Oct 29 00:40:35.148932 containerd[1577]: time="2025-10-29T00:40:35.148249147Z" level=info msg="StartContainer for \"9f65775ebde836e566828e5b95b3d25a258955324a8c642dc333d25ff7fee80a\" returns successfully" Oct 29 00:40:35.156018 kubelet[2375]: I1029 00:40:35.154346 2375 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:35.156695 kubelet[2375]: E1029 00:40:35.156655 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.202.85:6443/api/v1/nodes\": dial tcp 64.23.202.85:6443: connect: connection refused" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:35.171831 containerd[1577]: time="2025-10-29T00:40:35.171781211Z" level=info msg="StartContainer for \"e821f6ede3ce87ff92dc626d41e4b10b472b44535bfcf99792d0c783d03152eb\" returns successfully" Oct 29 00:40:35.619873 kubelet[2375]: E1029 00:40:35.619822 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-61970e6314\" not found" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:35.620240 kubelet[2375]: E1029 00:40:35.620189 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:35.636287 kubelet[2375]: E1029 00:40:35.635810 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-61970e6314\" not found" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:35.637420 kubelet[2375]: E1029 00:40:35.637081 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:35.639196 kubelet[2375]: E1029 00:40:35.639165 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-61970e6314\" not found" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:35.639474 kubelet[2375]: E1029 00:40:35.639454 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:36.641890 kubelet[2375]: E1029 00:40:36.641586 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-61970e6314\" not found" 
node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:36.641890 kubelet[2375]: E1029 00:40:36.641726 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:36.641890 kubelet[2375]: E1029 00:40:36.641830 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-61970e6314\" not found" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:36.642847 kubelet[2375]: E1029 00:40:36.642652 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:36.759888 kubelet[2375]: I1029 00:40:36.758016 2375 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:37.283202 kubelet[2375]: E1029 00:40:37.283154 2375 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.0-n-61970e6314\" not found" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:37.370088 kubelet[2375]: I1029 00:40:37.369767 2375 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:37.370088 kubelet[2375]: E1029 00:40:37.369819 2375 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4487.0.0-n-61970e6314\": node \"ci-4487.0.0-n-61970e6314\" not found" Oct 29 00:40:37.452685 kubelet[2375]: I1029 00:40:37.452630 2375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-61970e6314" Oct 29 00:40:37.459660 kubelet[2375]: E1029 00:40:37.459557 2375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.0-n-61970e6314\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.0-n-61970e6314" Oct 29 00:40:37.459660 kubelet[2375]: I1029 00:40:37.459607 2375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-61970e6314" Oct 29 00:40:37.462222 kubelet[2375]: E1029 00:40:37.461983 2375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-n-61970e6314\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.0-n-61970e6314" Oct 29 00:40:37.462222 kubelet[2375]: I1029 00:40:37.462035 2375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:37.464445 kubelet[2375]: E1029 00:40:37.464410 2375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:37.525109 kubelet[2375]: I1029 00:40:37.525024 2375 apiserver.go:52] "Watching apiserver" Oct 29 00:40:37.556194 kubelet[2375]: I1029 00:40:37.555756 2375 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 29 00:40:37.611883 kubelet[2375]: I1029 00:40:37.611737 2375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:37.614732 kubelet[2375]: E1029 00:40:37.614670 2375 kubelet.go:3221] "Failed 
creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:37.615006 kubelet[2375]: E1029 00:40:37.614959 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:39.562874 systemd[1]: Reload requested from client PID 2659 ('systemctl') (unit session-7.scope)... Oct 29 00:40:39.563289 systemd[1]: Reloading... Oct 29 00:40:39.666885 zram_generator::config[2703]: No configuration found. Oct 29 00:40:39.939309 systemd[1]: Reloading finished in 375 ms. Oct 29 00:40:39.976759 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:40:39.989389 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 00:40:39.989640 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:40:39.989702 systemd[1]: kubelet.service: Consumed 1.421s CPU time, 122.5M memory peak. Oct 29 00:40:39.993131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:40:40.168176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:40:40.180991 (kubelet)[2754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 00:40:40.253621 kubelet[2754]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 00:40:40.253621 kubelet[2754]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 00:40:40.253621 kubelet[2754]: I1029 00:40:40.253580 2754 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 00:40:40.269240 kubelet[2754]: I1029 00:40:40.269187 2754 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 29 00:40:40.269619 kubelet[2754]: I1029 00:40:40.269581 2754 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 00:40:40.269769 kubelet[2754]: I1029 00:40:40.269751 2754 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 29 00:40:40.269824 kubelet[2754]: I1029 00:40:40.269814 2754 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 00:40:40.270192 kubelet[2754]: I1029 00:40:40.270160 2754 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 00:40:40.271743 kubelet[2754]: I1029 00:40:40.271720 2754 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 29 00:40:40.274719 kubelet[2754]: I1029 00:40:40.274673 2754 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:40:40.281515 kubelet[2754]: I1029 00:40:40.281485 2754 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 00:40:40.286386 kubelet[2754]: I1029 00:40:40.285090 2754 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 29 00:40:40.286386 kubelet[2754]: I1029 00:40:40.285364 2754 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 00:40:40.286386 kubelet[2754]: I1029 00:40:40.285391 2754 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-n-61970e6314","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 00:40:40.286386 kubelet[2754]: I1029 00:40:40.285575 2754 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 00:40:40.286659 kubelet[2754]: I1029 00:40:40.285585 2754 container_manager_linux.go:306] "Creating device plugin manager" Oct 29 00:40:40.286659 kubelet[2754]: I1029 00:40:40.285617 2754 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 29 00:40:40.286843 kubelet[2754]: I1029 00:40:40.286825 2754 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:40:40.287104 kubelet[2754]: I1029 00:40:40.287092 2754 kubelet.go:475] "Attempting to sync node with API server" Oct 29 00:40:40.287180 kubelet[2754]: I1029 00:40:40.287170 2754 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 00:40:40.287242 kubelet[2754]: I1029 00:40:40.287235 2754 kubelet.go:387] "Adding apiserver pod source" Oct 29 00:40:40.287296 kubelet[2754]: I1029 00:40:40.287290 2754 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 00:40:40.291342 kubelet[2754]: I1029 00:40:40.291013 2754 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 00:40:40.293558 kubelet[2754]: I1029 00:40:40.292751 2754 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 00:40:40.293558 kubelet[2754]: I1029 00:40:40.292791 2754 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 29 
00:40:40.296601 kubelet[2754]: I1029 00:40:40.296577 2754 server.go:1262] "Started kubelet" Oct 29 00:40:40.300319 kubelet[2754]: I1029 00:40:40.300251 2754 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 00:40:40.303608 kubelet[2754]: I1029 00:40:40.303576 2754 server.go:310] "Adding debug handlers to kubelet server" Oct 29 00:40:40.311948 kubelet[2754]: I1029 00:40:40.311180 2754 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 00:40:40.311948 kubelet[2754]: I1029 00:40:40.311253 2754 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 29 00:40:40.311948 kubelet[2754]: I1029 00:40:40.311469 2754 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 00:40:40.312179 kubelet[2754]: I1029 00:40:40.311969 2754 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 00:40:40.315420 kubelet[2754]: I1029 00:40:40.315389 2754 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 00:40:40.317084 kubelet[2754]: I1029 00:40:40.317062 2754 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 29 00:40:40.317331 kubelet[2754]: E1029 00:40:40.317312 2754 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-61970e6314\" not found" Oct 29 00:40:40.318178 kubelet[2754]: I1029 00:40:40.317934 2754 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 29 00:40:40.318178 kubelet[2754]: I1029 00:40:40.318062 2754 reconciler.go:29] "Reconciler: start to sync state" Oct 29 00:40:40.329005 kubelet[2754]: I1029 00:40:40.328494 2754 factory.go:223] Registration of the systemd container factory successfully Oct 29 00:40:40.329005 kubelet[2754]: I1029 00:40:40.328587 2754 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 00:40:40.331281 kubelet[2754]: I1029 00:40:40.331157 2754 factory.go:223] Registration of the containerd container factory successfully Oct 29 00:40:40.362269 kubelet[2754]: I1029 00:40:40.362096 2754 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 29 00:40:40.365729 kubelet[2754]: I1029 00:40:40.365685 2754 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 29 00:40:40.365916 kubelet[2754]: I1029 00:40:40.365906 2754 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 29 00:40:40.366006 kubelet[2754]: I1029 00:40:40.365986 2754 kubelet.go:2427] "Starting kubelet main sync loop" Oct 29 00:40:40.366134 kubelet[2754]: E1029 00:40:40.366119 2754 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 00:40:40.404880 kubelet[2754]: I1029 00:40:40.404539 2754 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 00:40:40.404880 kubelet[2754]: I1029 00:40:40.404558 2754 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 00:40:40.404880 kubelet[2754]: I1029 00:40:40.404583 2754 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:40:40.404880 kubelet[2754]: I1029 00:40:40.404735 2754 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 00:40:40.404880 kubelet[2754]: I1029 00:40:40.404745 2754 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 00:40:40.404880 kubelet[2754]: I1029 00:40:40.404762 2754 policy_none.go:49] "None policy: Start" Oct 29 00:40:40.404880 kubelet[2754]: I1029 00:40:40.404771 2754 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 29 00:40:40.404880 kubelet[2754]: I1029 00:40:40.404780 2754 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 29 00:40:40.405879 kubelet[2754]: I1029 00:40:40.405477 2754 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 29 00:40:40.405988 kubelet[2754]: I1029 00:40:40.405976 2754 policy_none.go:47] "Start" Oct 29 00:40:40.412218 kubelet[2754]: E1029 00:40:40.412136 2754 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 00:40:40.412456 kubelet[2754]: I1029 00:40:40.412326 2754 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:40:40.412456 kubelet[2754]: I1029 00:40:40.412337 2754 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:40:40.412814 kubelet[2754]: I1029 00:40:40.412737 2754 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 00:40:40.414258 kubelet[2754]: E1029 00:40:40.414239 2754 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 00:40:40.468185 kubelet[2754]: I1029 00:40:40.468094 2754 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.468581 kubelet[2754]: I1029 00:40:40.468319 2754 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.468581 kubelet[2754]: I1029 00:40:40.468501 2754 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.474432 kubelet[2754]: I1029 00:40:40.473438 2754 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 29 00:40:40.474432 kubelet[2754]: I1029 00:40:40.474432 2754 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 29 00:40:40.475203 kubelet[2754]: I1029 00:40:40.475174 2754 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 29 00:40:40.515655 kubelet[2754]: I1029 00:40:40.515017 2754 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.528764 kubelet[2754]: I1029 00:40:40.528594 2754 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.529352 kubelet[2754]: I1029 00:40:40.529249 2754 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.619816 kubelet[2754]: I1029 00:40:40.619772 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a5ca2886ac2e7d05ff7258c62b960ee-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-61970e6314\" (UID: \"0a5ca2886ac2e7d05ff7258c62b960ee\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.619816 kubelet[2754]: I1029 00:40:40.619815 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a5ca2886ac2e7d05ff7258c62b960ee-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-61970e6314\" (UID: \"0a5ca2886ac2e7d05ff7258c62b960ee\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.619816 kubelet[2754]: I1029 00:40:40.619835 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72a309c24a0f0322168a157f9db79db6-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" (UID: \"72a309c24a0f0322168a157f9db79db6\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.619816 kubelet[2754]: I1029 00:40:40.619879 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72a309c24a0f0322168a157f9db79db6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" (UID: \"72a309c24a0f0322168a157f9db79db6\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.619816 kubelet[2754]: I1029 00:40:40.619901 2754 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d757140e7ff312a43fb4d0593e5ef8d-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-61970e6314\" (UID: \"1d757140e7ff312a43fb4d0593e5ef8d\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.620587 kubelet[2754]: I1029 00:40:40.619919 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a5ca2886ac2e7d05ff7258c62b960ee-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-61970e6314\" (UID: \"0a5ca2886ac2e7d05ff7258c62b960ee\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.620587 kubelet[2754]: I1029 00:40:40.619935 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/72a309c24a0f0322168a157f9db79db6-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" (UID: \"72a309c24a0f0322168a157f9db79db6\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.620587 kubelet[2754]: I1029 00:40:40.619950 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72a309c24a0f0322168a157f9db79db6-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" (UID: \"72a309c24a0f0322168a157f9db79db6\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.620587 kubelet[2754]: I1029 00:40:40.619964 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a309c24a0f0322168a157f9db79db6-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" (UID: \"72a309c24a0f0322168a157f9db79db6\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:40.775427 kubelet[2754]: E1029 00:40:40.774987 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:40.775427 kubelet[2754]: E1029 00:40:40.775359 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:40.776940 kubelet[2754]: E1029 00:40:40.776913 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:41.288507 kubelet[2754]: I1029 00:40:41.288461 2754 apiserver.go:52] "Watching apiserver" Oct 29 00:40:41.318607 kubelet[2754]: I1029 00:40:41.318536 2754 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 29 00:40:41.396794 kubelet[2754]: E1029 00:40:41.394928 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:41.400498 kubelet[2754]: I1029 00:40:41.399142 2754 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 
00:40:41.400498 kubelet[2754]: E1029 00:40:41.399449 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:41.414059 kubelet[2754]: I1029 00:40:41.413603 2754 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 29 00:40:41.414059 kubelet[2754]: E1029 00:40:41.413701 2754 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-n-61970e6314\" already exists" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" Oct 29 00:40:41.414059 kubelet[2754]: E1029 00:40:41.413983 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:41.447457 kubelet[2754]: I1029 00:40:41.447215 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.0-n-61970e6314" podStartSLOduration=1.44719669 podStartE2EDuration="1.44719669s" podCreationTimestamp="2025-10-29 00:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:40:41.44593362 +0000 UTC m=+1.257046651" watchObservedRunningTime="2025-10-29 00:40:41.44719669 +0000 UTC m=+1.258309713" Oct 29 00:40:41.448220 kubelet[2754]: I1029 00:40:41.447715 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.0-n-61970e6314" podStartSLOduration=1.447704675 podStartE2EDuration="1.447704675s" podCreationTimestamp="2025-10-29 00:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:40:41.43459178 +0000 UTC m=+1.245704811" watchObservedRunningTime="2025-10-29 00:40:41.447704675 +0000 UTC m=+1.258817701" Oct 29 00:40:41.458708 kubelet[2754]: I1029 00:40:41.458483 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-61970e6314" podStartSLOduration=1.458459766 podStartE2EDuration="1.458459766s" podCreationTimestamp="2025-10-29 00:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:40:41.458451471 +0000 UTC m=+1.269564502" watchObservedRunningTime="2025-10-29 00:40:41.458459766 +0000 UTC m=+1.269572800" Oct 29 00:40:42.396569 kubelet[2754]: E1029 00:40:42.396529 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:42.398433 kubelet[2754]: E1029 00:40:42.398008 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:42.398433 kubelet[2754]: E1029 00:40:42.398313 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:44.926683 kubelet[2754]: E1029 00:40:44.926589 2754 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:45.137799 kubelet[2754]: E1029 00:40:45.137756 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:45.402312 kubelet[2754]: E1029 00:40:45.402013 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:46.404134 systemd-timesyncd[1454]: Contacted time server 216.144.228.179:123 (2.flatcar.pool.ntp.org). Oct 29 00:40:46.404323 systemd-timesyncd[1454]: Initial clock synchronization to Wed 2025-10-29 00:40:46.403877 UTC. Oct 29 00:40:46.405081 systemd-resolved[1279]: Clock change detected. Flushing caches. Oct 29 00:40:46.933992 kubelet[2754]: I1029 00:40:46.933934 2754 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 00:40:46.935218 kubelet[2754]: I1029 00:40:46.934766 2754 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 00:40:46.935260 containerd[1577]: time="2025-10-29T00:40:46.934331641Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 29 00:40:47.886704 systemd[1]: Created slice kubepods-besteffort-pod50089223_5f03_4bf7_a768_c4cf7f7fd3a7.slice - libcontainer container kubepods-besteffort-pod50089223_5f03_4bf7_a768_c4cf7f7fd3a7.slice. Oct 29 00:40:47.917259 kubelet[2754]: I1029 00:40:47.916675 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50089223-5f03-4bf7-a768-c4cf7f7fd3a7-kube-proxy\") pod \"kube-proxy-sldq2\" (UID: \"50089223-5f03-4bf7-a768-c4cf7f7fd3a7\") " pod="kube-system/kube-proxy-sldq2" Oct 29 00:40:47.917259 kubelet[2754]: I1029 00:40:47.916734 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50089223-5f03-4bf7-a768-c4cf7f7fd3a7-xtables-lock\") pod \"kube-proxy-sldq2\" (UID: \"50089223-5f03-4bf7-a768-c4cf7f7fd3a7\") " pod="kube-system/kube-proxy-sldq2" Oct 29 00:40:47.917259 kubelet[2754]: I1029 00:40:47.916767 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnktl\" (UniqueName: \"kubernetes.io/projected/50089223-5f03-4bf7-a768-c4cf7f7fd3a7-kube-api-access-vnktl\") pod \"kube-proxy-sldq2\" (UID: \"50089223-5f03-4bf7-a768-c4cf7f7fd3a7\") " pod="kube-system/kube-proxy-sldq2" Oct 29 00:40:47.917259 kubelet[2754]: I1029 00:40:47.916797 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50089223-5f03-4bf7-a768-c4cf7f7fd3a7-lib-modules\") pod \"kube-proxy-sldq2\" (UID: \"50089223-5f03-4bf7-a768-c4cf7f7fd3a7\") " pod="kube-system/kube-proxy-sldq2" Oct 29 00:40:48.152932 systemd[1]: Created slice kubepods-besteffort-pod2b0f3f5e_159e_4b06_b5f4_b7088fcd20c5.slice - libcontainer container kubepods-besteffort-pod2b0f3f5e_159e_4b06_b5f4_b7088fcd20c5.slice. 
Oct 29 00:40:48.202264 kubelet[2754]: E1029 00:40:48.202176 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:48.203940 containerd[1577]: time="2025-10-29T00:40:48.203820506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sldq2,Uid:50089223-5f03-4bf7-a768-c4cf7f7fd3a7,Namespace:kube-system,Attempt:0,}" Oct 29 00:40:48.236321 containerd[1577]: time="2025-10-29T00:40:48.235833369Z" level=info msg="connecting to shim 09a968eb8435cdf5fe33144e35cc1480d71931dd69c0b234b9fe208b677d247d" address="unix:///run/containerd/s/8fe940a5d77716641e1c46b993a551630305ad89731986052bb9e629984abc3d" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:40:48.286715 systemd[1]: Started cri-containerd-09a968eb8435cdf5fe33144e35cc1480d71931dd69c0b234b9fe208b677d247d.scope - libcontainer container 09a968eb8435cdf5fe33144e35cc1480d71931dd69c0b234b9fe208b677d247d. Oct 29 00:40:48.320632 kubelet[2754]: I1029 00:40:48.320571 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b0f3f5e-159e-4b06-b5f4-b7088fcd20c5-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-s95wl\" (UID: \"2b0f3f5e-159e-4b06-b5f4-b7088fcd20c5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-s95wl" Oct 29 00:40:48.320632 kubelet[2754]: I1029 00:40:48.320629 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc2hv\" (UniqueName: \"kubernetes.io/projected/2b0f3f5e-159e-4b06-b5f4-b7088fcd20c5-kube-api-access-pc2hv\") pod \"tigera-operator-65cdcdfd6d-s95wl\" (UID: \"2b0f3f5e-159e-4b06-b5f4-b7088fcd20c5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-s95wl" Oct 29 00:40:48.335698 containerd[1577]: time="2025-10-29T00:40:48.335633006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sldq2,Uid:50089223-5f03-4bf7-a768-c4cf7f7fd3a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"09a968eb8435cdf5fe33144e35cc1480d71931dd69c0b234b9fe208b677d247d\"" Oct 29 00:40:48.337487 kubelet[2754]: E1029 00:40:48.337446 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:48.346578 containerd[1577]: time="2025-10-29T00:40:48.346237752Z" level=info msg="CreateContainer within sandbox \"09a968eb8435cdf5fe33144e35cc1480d71931dd69c0b234b9fe208b677d247d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 00:40:48.368774 containerd[1577]: time="2025-10-29T00:40:48.368714067Z" level=info msg="Container 866ef47cd179529201f63fe65b5ec2a95f2e8ca6b0cfe66de3ee37e191228c4b: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:40:48.371700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327337426.mount: Deactivated successfully. 
Oct 29 00:40:48.388006 containerd[1577]: time="2025-10-29T00:40:48.387925553Z" level=info msg="CreateContainer within sandbox \"09a968eb8435cdf5fe33144e35cc1480d71931dd69c0b234b9fe208b677d247d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"866ef47cd179529201f63fe65b5ec2a95f2e8ca6b0cfe66de3ee37e191228c4b\"" Oct 29 00:40:48.393402 containerd[1577]: time="2025-10-29T00:40:48.392700833Z" level=info msg="StartContainer for \"866ef47cd179529201f63fe65b5ec2a95f2e8ca6b0cfe66de3ee37e191228c4b\"" Oct 29 00:40:48.395996 containerd[1577]: time="2025-10-29T00:40:48.395520225Z" level=info msg="connecting to shim 866ef47cd179529201f63fe65b5ec2a95f2e8ca6b0cfe66de3ee37e191228c4b" address="unix:///run/containerd/s/8fe940a5d77716641e1c46b993a551630305ad89731986052bb9e629984abc3d" protocol=ttrpc version=3 Oct 29 00:40:48.425709 systemd[1]: Started cri-containerd-866ef47cd179529201f63fe65b5ec2a95f2e8ca6b0cfe66de3ee37e191228c4b.scope - libcontainer container 866ef47cd179529201f63fe65b5ec2a95f2e8ca6b0cfe66de3ee37e191228c4b. Oct 29 00:40:48.464131 containerd[1577]: time="2025-10-29T00:40:48.463736850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-s95wl,Uid:2b0f3f5e-159e-4b06-b5f4-b7088fcd20c5,Namespace:tigera-operator,Attempt:0,}" Oct 29 00:40:48.503745 containerd[1577]: time="2025-10-29T00:40:48.503685971Z" level=info msg="connecting to shim 108d4ef043a53cb2b6132d9a95356c37b9c62254c4842831c209444815fc702e" address="unix:///run/containerd/s/3f270c0493fdac09f56da5452834df0cc574ceca3aa17133b744a238db5b2016" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:40:48.529323 containerd[1577]: time="2025-10-29T00:40:48.529265769Z" level=info msg="StartContainer for \"866ef47cd179529201f63fe65b5ec2a95f2e8ca6b0cfe66de3ee37e191228c4b\" returns successfully" Oct 29 00:40:48.558680 systemd[1]: Started cri-containerd-108d4ef043a53cb2b6132d9a95356c37b9c62254c4842831c209444815fc702e.scope - libcontainer container 108d4ef043a53cb2b6132d9a95356c37b9c62254c4842831c209444815fc702e. Oct 29 00:40:48.670175 containerd[1577]: time="2025-10-29T00:40:48.670109149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-s95wl,Uid:2b0f3f5e-159e-4b06-b5f4-b7088fcd20c5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"108d4ef043a53cb2b6132d9a95356c37b9c62254c4842831c209444815fc702e\"" Oct 29 00:40:48.675845 containerd[1577]: time="2025-10-29T00:40:48.675784023Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 00:40:49.042373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492855185.mount: Deactivated successfully. 
Oct 29 00:40:49.170809 kubelet[2754]: E1029 00:40:49.169751 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:49.191827 kubelet[2754]: I1029 00:40:49.191656 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sldq2" podStartSLOduration=2.191495301 podStartE2EDuration="2.191495301s" podCreationTimestamp="2025-10-29 00:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:40:49.191245393 +0000 UTC m=+8.252868165" watchObservedRunningTime="2025-10-29 00:40:49.191495301 +0000 UTC m=+8.253118072" Oct 29 00:40:50.346687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3063647911.mount: Deactivated successfully. Oct 29 00:40:51.522025 kubelet[2754]: E1029 00:40:51.521987 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:51.878275 containerd[1577]: time="2025-10-29T00:40:51.878159003Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:51.879801 containerd[1577]: time="2025-10-29T00:40:51.879760975Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 29 00:40:51.880462 containerd[1577]: time="2025-10-29T00:40:51.880058929Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:51.882008 containerd[1577]: time="2025-10-29T00:40:51.881970450Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:40:51.882810 containerd[1577]: time="2025-10-29T00:40:51.882783155Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.206948841s" Oct 29 00:40:51.882906 containerd[1577]: time="2025-10-29T00:40:51.882893193Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 29 00:40:51.887243 containerd[1577]: time="2025-10-29T00:40:51.887199377Z" level=info msg="CreateContainer within sandbox \"108d4ef043a53cb2b6132d9a95356c37b9c62254c4842831c209444815fc702e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 00:40:51.894881 containerd[1577]: time="2025-10-29T00:40:51.894836622Z" level=info msg="Container 8f03e0bb9e567ed33d765260ed5f0e3728c7257d5dc97c303eda66505b2993a2: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:40:51.901003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount6803540.mount: Deactivated successfully. 
Oct 29 00:40:51.913043 containerd[1577]: time="2025-10-29T00:40:51.912994432Z" level=info msg="CreateContainer within sandbox \"108d4ef043a53cb2b6132d9a95356c37b9c62254c4842831c209444815fc702e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8f03e0bb9e567ed33d765260ed5f0e3728c7257d5dc97c303eda66505b2993a2\"" Oct 29 00:40:51.913964 containerd[1577]: time="2025-10-29T00:40:51.913777724Z" level=info msg="StartContainer for \"8f03e0bb9e567ed33d765260ed5f0e3728c7257d5dc97c303eda66505b2993a2\"" Oct 29 00:40:51.915022 containerd[1577]: time="2025-10-29T00:40:51.914967105Z" level=info msg="connecting to shim 8f03e0bb9e567ed33d765260ed5f0e3728c7257d5dc97c303eda66505b2993a2" address="unix:///run/containerd/s/3f270c0493fdac09f56da5452834df0cc574ceca3aa17133b744a238db5b2016" protocol=ttrpc version=3 Oct 29 00:40:51.942475 systemd[1]: Started cri-containerd-8f03e0bb9e567ed33d765260ed5f0e3728c7257d5dc97c303eda66505b2993a2.scope - libcontainer container 8f03e0bb9e567ed33d765260ed5f0e3728c7257d5dc97c303eda66505b2993a2. Oct 29 00:40:51.985656 containerd[1577]: time="2025-10-29T00:40:51.985591865Z" level=info msg="StartContainer for \"8f03e0bb9e567ed33d765260ed5f0e3728c7257d5dc97c303eda66505b2993a2\" returns successfully" Oct 29 00:40:52.200297 kubelet[2754]: E1029 00:40:52.199117 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:52.221441 kubelet[2754]: I1029 00:40:52.220765 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-s95wl" podStartSLOduration=1.011107532 podStartE2EDuration="4.220741129s" podCreationTimestamp="2025-10-29 00:40:48 +0000 UTC" firstStartedPulling="2025-10-29 00:40:48.674216677 +0000 UTC m=+7.735839447" lastFinishedPulling="2025-10-29 00:40:51.883850282 +0000 UTC m=+10.945473044" observedRunningTime="2025-10-29 00:40:52.209463768 +0000 UTC m=+11.271086538" watchObservedRunningTime="2025-10-29 00:40:52.220741129 +0000 UTC m=+11.282364199" Oct 29 00:40:53.201375 kubelet[2754]: E1029 00:40:53.201194 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:55.683702 kubelet[2754]: E1029 00:40:55.683300 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:55.951807 update_engine[1564]: I20251029 00:40:55.951442 1564 update_attempter.cc:509] Updating boot flags... Oct 29 00:40:56.215454 kubelet[2754]: E1029 00:40:56.210106 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:40:58.714734 sudo[1823]: pam_unix(sudo:session): session closed for user root Oct 29 00:40:58.720412 sshd[1822]: Connection closed by 139.178.89.65 port 52456 Oct 29 00:40:58.719563 sshd-session[1819]: pam_unix(sshd:session): session closed for user core Oct 29 00:40:58.726032 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit. Oct 29 00:40:58.727582 systemd[1]: sshd@6-64.23.202.85:22-139.178.89.65:52456.service: Deactivated successfully. Oct 29 00:40:58.732458 systemd[1]: session-7.scope: Deactivated successfully. 
Oct 29 00:40:58.732758 systemd[1]: session-7.scope: Consumed 7.061s CPU time, 167.7M memory peak. Oct 29 00:40:58.740333 systemd-logind[1563]: Removed session 7. Oct 29 00:41:04.743334 systemd[1]: Created slice kubepods-besteffort-pod3cddef21_88b9_4369_8214_f59707d0550d.slice - libcontainer container kubepods-besteffort-pod3cddef21_88b9_4369_8214_f59707d0550d.slice. Oct 29 00:41:04.828580 kubelet[2754]: I1029 00:41:04.828399 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cddef21-88b9-4369-8214-f59707d0550d-tigera-ca-bundle\") pod \"calico-typha-796bb94b9b-gltnc\" (UID: \"3cddef21-88b9-4369-8214-f59707d0550d\") " pod="calico-system/calico-typha-796bb94b9b-gltnc" Oct 29 00:41:04.828580 kubelet[2754]: I1029 00:41:04.828464 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkt8b\" (UniqueName: \"kubernetes.io/projected/3cddef21-88b9-4369-8214-f59707d0550d-kube-api-access-xkt8b\") pod \"calico-typha-796bb94b9b-gltnc\" (UID: \"3cddef21-88b9-4369-8214-f59707d0550d\") " pod="calico-system/calico-typha-796bb94b9b-gltnc" Oct 29 00:41:04.828580 kubelet[2754]: I1029 00:41:04.828483 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3cddef21-88b9-4369-8214-f59707d0550d-typha-certs\") pod \"calico-typha-796bb94b9b-gltnc\" (UID: \"3cddef21-88b9-4369-8214-f59707d0550d\") " pod="calico-system/calico-typha-796bb94b9b-gltnc" Oct 29 00:41:04.906145 systemd[1]: Created slice kubepods-besteffort-podc062fa9a_a279_4d3b_bd19_9cc458492b14.slice - libcontainer container kubepods-besteffort-podc062fa9a_a279_4d3b_bd19_9cc458492b14.slice. 
Oct 29 00:41:04.929646 kubelet[2754]: I1029 00:41:04.929535 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c062fa9a-a279-4d3b-bd19-9cc458492b14-cni-log-dir\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:04.929646 kubelet[2754]: I1029 00:41:04.929577 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c062fa9a-a279-4d3b-bd19-9cc458492b14-lib-modules\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:04.929646 kubelet[2754]: I1029 00:41:04.929593 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c062fa9a-a279-4d3b-bd19-9cc458492b14-policysync\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:04.929848 kubelet[2754]: I1029 00:41:04.929648 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c062fa9a-a279-4d3b-bd19-9cc458492b14-tigera-ca-bundle\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:04.929848 kubelet[2754]: I1029 00:41:04.929696 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c062fa9a-a279-4d3b-bd19-9cc458492b14-var-run-calico\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:04.929848 kubelet[2754]: I1029 00:41:04.929728 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c062fa9a-a279-4d3b-bd19-9cc458492b14-var-lib-calico\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:04.929848 kubelet[2754]: I1029 00:41:04.929792 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c062fa9a-a279-4d3b-bd19-9cc458492b14-cni-net-dir\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:04.929848 kubelet[2754]: I1029 00:41:04.929820 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c062fa9a-a279-4d3b-bd19-9cc458492b14-node-certs\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:04.930028 kubelet[2754]: I1029 00:41:04.929840 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c062fa9a-a279-4d3b-bd19-9cc458492b14-cni-bin-dir\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:04.930028 kubelet[2754]: I1029 00:41:04.929871 2754 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c062fa9a-a279-4d3b-bd19-9cc458492b14-xtables-lock\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:04.930028 kubelet[2754]: I1029 00:41:04.929895 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c062fa9a-a279-4d3b-bd19-9cc458492b14-flexvol-driver-host\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:04.930028 kubelet[2754]: I1029 00:41:04.929912 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsccd\" (UniqueName: \"kubernetes.io/projected/c062fa9a-a279-4d3b-bd19-9cc458492b14-kube-api-access-fsccd\") pod \"calico-node-b4kcr\" (UID: \"c062fa9a-a279-4d3b-bd19-9cc458492b14\") " pod="calico-system/calico-node-b4kcr" Oct 29 00:41:05.034449 kubelet[2754]: E1029 00:41:05.034018 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.034449 kubelet[2754]: W1029 00:41:05.034050 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.034449 kubelet[2754]: E1029 00:41:05.034094 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.034449 kubelet[2754]: E1029 00:41:05.034375 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.034449 kubelet[2754]: W1029 00:41:05.034386 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.034449 kubelet[2754]: E1029 00:41:05.034399 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.034770 kubelet[2754]: E1029 00:41:05.034647 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.034770 kubelet[2754]: W1029 00:41:05.034657 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.034770 kubelet[2754]: E1029 00:41:05.034691 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.036367 kubelet[2754]: E1029 00:41:05.035005 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.036367 kubelet[2754]: W1029 00:41:05.035023 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.036367 kubelet[2754]: E1029 00:41:05.035036 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.036367 kubelet[2754]: E1029 00:41:05.035334 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.036367 kubelet[2754]: W1029 00:41:05.035365 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.036367 kubelet[2754]: E1029 00:41:05.035379 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.036367 kubelet[2754]: E1029 00:41:05.035739 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.036367 kubelet[2754]: W1029 00:41:05.035752 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.036367 kubelet[2754]: E1029 00:41:05.035782 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.036367 kubelet[2754]: E1029 00:41:05.036219 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.037897 kubelet[2754]: W1029 00:41:05.036233 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.037897 kubelet[2754]: E1029 00:41:05.036247 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.037897 kubelet[2754]: E1029 00:41:05.037325 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.037897 kubelet[2754]: W1029 00:41:05.037369 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.037897 kubelet[2754]: E1029 00:41:05.037383 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.049387 kubelet[2754]: E1029 00:41:05.049017 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.050091 kubelet[2754]: W1029 00:41:05.049726 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.050091 kubelet[2754]: E1029 00:41:05.049773 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.070410 kubelet[2754]: E1029 00:41:05.068886 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.070410 kubelet[2754]: W1029 00:41:05.068923 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.070410 kubelet[2754]: E1029 00:41:05.068962 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.071225 kubelet[2754]: E1029 00:41:05.071174 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:05.076062 containerd[1577]: time="2025-10-29T00:41:05.075999820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796bb94b9b-gltnc,Uid:3cddef21-88b9-4369-8214-f59707d0550d,Namespace:calico-system,Attempt:0,}" Oct 29 00:41:05.160788 kubelet[2754]: E1029 00:41:05.160680 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:41:05.187905 containerd[1577]: time="2025-10-29T00:41:05.187822517Z" level=info msg="connecting to shim 797b5e4f5997592878ff0611fdff95efe80db9f91f3ae1c9ae63bedf0e0864a2" address="unix:///run/containerd/s/c73b2e24a7db6092e9db32328d769ad59be33a71fe50ff0637694ebcc2b6c4de" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:05.212759 kubelet[2754]: E1029 00:41:05.212712 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:05.215643 containerd[1577]: time="2025-10-29T00:41:05.215600837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b4kcr,Uid:c062fa9a-a279-4d3b-bd19-9cc458492b14,Namespace:calico-system,Attempt:0,}" Oct 29 00:41:05.229777 kubelet[2754]: E1029 00:41:05.229718 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.229777 kubelet[2754]: W1029 00:41:05.229749 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.230252 
kubelet[2754]: E1029 00:41:05.230220 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.231570 kubelet[2754]: E1029 00:41:05.231529 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.231570 kubelet[2754]: W1029 00:41:05.231553 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.231758 kubelet[2754]: E1029 00:41:05.231578 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.232162 kubelet[2754]: E1029 00:41:05.232129 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.232162 kubelet[2754]: W1029 00:41:05.232144 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.232608 kubelet[2754]: E1029 00:41:05.232177 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.233376 kubelet[2754]: E1029 00:41:05.233328 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.233628 kubelet[2754]: W1029 00:41:05.233568 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.234112 kubelet[2754]: E1029 00:41:05.233596 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.235759 kubelet[2754]: E1029 00:41:05.235456 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.235759 kubelet[2754]: W1029 00:41:05.235472 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.235759 kubelet[2754]: E1029 00:41:05.235488 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.236289 kubelet[2754]: E1029 00:41:05.236078 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.236289 kubelet[2754]: W1029 00:41:05.236100 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.236289 kubelet[2754]: E1029 00:41:05.236117 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.237237 kubelet[2754]: E1029 00:41:05.237070 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.237237 kubelet[2754]: W1029 00:41:05.237088 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.237237 kubelet[2754]: E1029 00:41:05.237105 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.237604 kubelet[2754]: E1029 00:41:05.237584 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.237896 kubelet[2754]: W1029 00:41:05.237711 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.237896 kubelet[2754]: E1029 00:41:05.237729 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.238560 kubelet[2754]: E1029 00:41:05.238253 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.238560 kubelet[2754]: W1029 00:41:05.238270 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.238560 kubelet[2754]: E1029 00:41:05.238285 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.239538 kubelet[2754]: E1029 00:41:05.239486 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.239701 kubelet[2754]: W1029 00:41:05.239637 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.239701 kubelet[2754]: E1029 00:41:05.239656 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.240739 kubelet[2754]: E1029 00:41:05.240659 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.240739 kubelet[2754]: W1029 00:41:05.240674 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.240739 kubelet[2754]: E1029 00:41:05.240686 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.242240 kubelet[2754]: E1029 00:41:05.241807 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.242240 kubelet[2754]: W1029 00:41:05.241909 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.242240 kubelet[2754]: E1029 00:41:05.241927 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.243461 kubelet[2754]: E1029 00:41:05.243430 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.243739 kubelet[2754]: W1029 00:41:05.243594 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.244015 kubelet[2754]: E1029 00:41:05.243820 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.244208 kubelet[2754]: E1029 00:41:05.244196 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.244297 kubelet[2754]: W1029 00:41:05.244285 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.244395 kubelet[2754]: E1029 00:41:05.244385 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.244985 kubelet[2754]: E1029 00:41:05.244828 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.244985 kubelet[2754]: W1029 00:41:05.244845 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.244985 kubelet[2754]: E1029 00:41:05.244859 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.245191 kubelet[2754]: E1029 00:41:05.245179 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.245271 kubelet[2754]: W1029 00:41:05.245259 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.245429 kubelet[2754]: E1029 00:41:05.245326 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.246585 systemd[1]: Started cri-containerd-797b5e4f5997592878ff0611fdff95efe80db9f91f3ae1c9ae63bedf0e0864a2.scope - libcontainer container 797b5e4f5997592878ff0611fdff95efe80db9f91f3ae1c9ae63bedf0e0864a2. Oct 29 00:41:05.247572 kubelet[2754]: E1029 00:41:05.247427 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.247572 kubelet[2754]: W1029 00:41:05.247446 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.247572 kubelet[2754]: E1029 00:41:05.247464 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.247899 kubelet[2754]: E1029 00:41:05.247787 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.247899 kubelet[2754]: W1029 00:41:05.247800 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.247899 kubelet[2754]: E1029 00:41:05.247813 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.248219 kubelet[2754]: E1029 00:41:05.248078 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.248219 kubelet[2754]: W1029 00:41:05.248089 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.248219 kubelet[2754]: E1029 00:41:05.248099 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.248723 kubelet[2754]: E1029 00:41:05.248443 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.248723 kubelet[2754]: W1029 00:41:05.248454 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.248723 kubelet[2754]: E1029 00:41:05.248467 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.249367 kubelet[2754]: E1029 00:41:05.249152 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.249367 kubelet[2754]: W1029 00:41:05.249166 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.249367 kubelet[2754]: E1029 00:41:05.249181 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.249367 kubelet[2754]: I1029 00:41:05.249213 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7b26047-6837-40e0-9ca7-4fee0f45a405-kubelet-dir\") pod \"csi-node-driver-mlqqf\" (UID: \"a7b26047-6837-40e0-9ca7-4fee0f45a405\") " pod="calico-system/csi-node-driver-mlqqf" Oct 29 00:41:05.249954 kubelet[2754]: E1029 00:41:05.249707 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.249954 kubelet[2754]: W1029 00:41:05.249722 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.249954 kubelet[2754]: E1029 00:41:05.249734 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.249954 kubelet[2754]: I1029 00:41:05.249753 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a7b26047-6837-40e0-9ca7-4fee0f45a405-varrun\") pod \"csi-node-driver-mlqqf\" (UID: \"a7b26047-6837-40e0-9ca7-4fee0f45a405\") " pod="calico-system/csi-node-driver-mlqqf" Oct 29 00:41:05.250328 kubelet[2754]: E1029 00:41:05.250310 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.250645 kubelet[2754]: W1029 00:41:05.250482 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.250645 kubelet[2754]: E1029 00:41:05.250510 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.250645 kubelet[2754]: I1029 00:41:05.250533 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7b26047-6837-40e0-9ca7-4fee0f45a405-registration-dir\") pod \"csi-node-driver-mlqqf\" (UID: \"a7b26047-6837-40e0-9ca7-4fee0f45a405\") " pod="calico-system/csi-node-driver-mlqqf" Oct 29 00:41:05.251574 kubelet[2754]: E1029 00:41:05.251394 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.251574 kubelet[2754]: W1029 00:41:05.251409 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.251574 kubelet[2754]: E1029 00:41:05.251434 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.251574 kubelet[2754]: I1029 00:41:05.251454 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd42b\" (UniqueName: \"kubernetes.io/projected/a7b26047-6837-40e0-9ca7-4fee0f45a405-kube-api-access-sd42b\") pod \"csi-node-driver-mlqqf\" (UID: \"a7b26047-6837-40e0-9ca7-4fee0f45a405\") " pod="calico-system/csi-node-driver-mlqqf" Oct 29 00:41:05.254291 kubelet[2754]: E1029 00:41:05.254262 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.255295 kubelet[2754]: W1029 00:41:05.255264 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.255456 kubelet[2754]: E1029 00:41:05.255441 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.255531 kubelet[2754]: I1029 00:41:05.255520 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7b26047-6837-40e0-9ca7-4fee0f45a405-socket-dir\") pod \"csi-node-driver-mlqqf\" (UID: \"a7b26047-6837-40e0-9ca7-4fee0f45a405\") " pod="calico-system/csi-node-driver-mlqqf" Oct 29 00:41:05.257439 kubelet[2754]: E1029 00:41:05.257291 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.257439 kubelet[2754]: W1029 00:41:05.257326 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.257439 kubelet[2754]: E1029 00:41:05.257368 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.259085 kubelet[2754]: E1029 00:41:05.258779 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.259085 kubelet[2754]: W1029 00:41:05.258825 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.259085 kubelet[2754]: E1029 00:41:05.258859 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.260538 kubelet[2754]: E1029 00:41:05.260515 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.260751 kubelet[2754]: W1029 00:41:05.260687 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.260846 kubelet[2754]: E1029 00:41:05.260830 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.264876 kubelet[2754]: E1029 00:41:05.264720 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.265057 kubelet[2754]: W1029 00:41:05.265034 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.265196 kubelet[2754]: E1029 00:41:05.265176 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.267279 kubelet[2754]: E1029 00:41:05.267197 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.267279 kubelet[2754]: W1029 00:41:05.267222 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.267279 kubelet[2754]: E1029 00:41:05.267249 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.268453 kubelet[2754]: E1029 00:41:05.268070 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.268453 kubelet[2754]: W1029 00:41:05.268087 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.268453 kubelet[2754]: E1029 00:41:05.268108 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.268847 kubelet[2754]: E1029 00:41:05.268825 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.269022 kubelet[2754]: W1029 00:41:05.268963 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.269022 kubelet[2754]: E1029 00:41:05.268990 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.269574 kubelet[2754]: E1029 00:41:05.269552 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.269803 kubelet[2754]: W1029 00:41:05.269667 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.269803 kubelet[2754]: E1029 00:41:05.269693 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.270357 kubelet[2754]: E1029 00:41:05.270310 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.270357 kubelet[2754]: W1029 00:41:05.270327 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.270749 kubelet[2754]: E1029 00:41:05.270510 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.271125 kubelet[2754]: E1029 00:41:05.271109 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.271326 kubelet[2754]: W1029 00:41:05.271227 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.271326 kubelet[2754]: E1029 00:41:05.271249 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.285463 containerd[1577]: time="2025-10-29T00:41:05.285251064Z" level=info msg="connecting to shim 374f0771662befed8cb6dfc2abc6c3444ab82cb4bf2fb37c6fc3fc5194460033" address="unix:///run/containerd/s/03f87e3d4248d53e1ccc7e9ac3069a78bfa71ececccae15159d4c2256e1e5f94" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:05.332675 systemd[1]: Started cri-containerd-374f0771662befed8cb6dfc2abc6c3444ab82cb4bf2fb37c6fc3fc5194460033.scope - libcontainer container 374f0771662befed8cb6dfc2abc6c3444ab82cb4bf2fb37c6fc3fc5194460033. 
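Note: the repeated driver-call failures above all share one root cause. The FlexVolume binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not on the node, so every probe returns empty output, and unmarshalling an empty byte slice in Go produces exactly the "unexpected end of JSON input" error the kubelet logs. A minimal standalone Go sketch of that failure mode (the driverStatus type here is illustrative, not the kubelet's actual struct):

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus stands in for the JSON status object a FlexVolume driver is
// expected to print on stdout; the single field is illustrative only.
type driverStatus struct {
	Status string `json:"status"`
}

func main() {
	// The driver executable was never found, so the call produced no output.
	output := []byte("")

	var st driverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		fmt.Println("unmarshal failed:", err) // unmarshal failed: unexpected end of JSON input
	}
}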
Oct 29 00:41:05.357287 kubelet[2754]: E1029 00:41:05.357167 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.357287 kubelet[2754]: W1029 00:41:05.357206 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.357835 kubelet[2754]: E1029 00:41:05.357406 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.358109 kubelet[2754]: E1029 00:41:05.358087 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.359501 kubelet[2754]: W1029 00:41:05.359467 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.359805 kubelet[2754]: E1029 00:41:05.359630 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.360373 kubelet[2754]: E1029 00:41:05.360336 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.360601 kubelet[2754]: W1029 00:41:05.360533 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.360601 kubelet[2754]: E1029 00:41:05.360559 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.361190 kubelet[2754]: E1029 00:41:05.361132 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.361190 kubelet[2754]: W1029 00:41:05.361146 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.361190 kubelet[2754]: E1029 00:41:05.361160 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.361513 kubelet[2754]: E1029 00:41:05.361461 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.361513 kubelet[2754]: W1029 00:41:05.361481 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.361513 kubelet[2754]: E1029 00:41:05.361500 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.361908 kubelet[2754]: E1029 00:41:05.361883 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.361908 kubelet[2754]: W1029 00:41:05.361896 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.361908 kubelet[2754]: E1029 00:41:05.361907 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.362613 kubelet[2754]: E1029 00:41:05.362472 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.362613 kubelet[2754]: W1029 00:41:05.362487 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.362613 kubelet[2754]: E1029 00:41:05.362499 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.363490 kubelet[2754]: E1029 00:41:05.363201 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.363490 kubelet[2754]: W1029 00:41:05.363217 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.363490 kubelet[2754]: E1029 00:41:05.363233 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.365626 kubelet[2754]: E1029 00:41:05.365555 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.365626 kubelet[2754]: W1029 00:41:05.365576 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.365626 kubelet[2754]: E1029 00:41:05.365596 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.365976 kubelet[2754]: E1029 00:41:05.365815 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.365976 kubelet[2754]: W1029 00:41:05.365824 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.365976 kubelet[2754]: E1029 00:41:05.365835 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.366054 kubelet[2754]: E1029 00:41:05.366013 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.366054 kubelet[2754]: W1029 00:41:05.366020 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.366054 kubelet[2754]: E1029 00:41:05.366028 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.367242 kubelet[2754]: E1029 00:41:05.367213 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.367242 kubelet[2754]: W1029 00:41:05.367230 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.367315 kubelet[2754]: E1029 00:41:05.367248 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.367482 kubelet[2754]: E1029 00:41:05.367466 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.367482 kubelet[2754]: W1029 00:41:05.367477 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.367612 kubelet[2754]: E1029 00:41:05.367486 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.367647 kubelet[2754]: E1029 00:41:05.367641 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.367685 kubelet[2754]: W1029 00:41:05.367648 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.367685 kubelet[2754]: E1029 00:41:05.367655 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.367806 kubelet[2754]: E1029 00:41:05.367793 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.367806 kubelet[2754]: W1029 00:41:05.367802 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.367898 kubelet[2754]: E1029 00:41:05.367809 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.368034 kubelet[2754]: E1029 00:41:05.368018 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.368034 kubelet[2754]: W1029 00:41:05.368030 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.368115 kubelet[2754]: E1029 00:41:05.368045 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.369566 kubelet[2754]: E1029 00:41:05.369539 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.369566 kubelet[2754]: W1029 00:41:05.369561 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.369566 kubelet[2754]: E1029 00:41:05.369579 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.369900 kubelet[2754]: E1029 00:41:05.369826 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.369900 kubelet[2754]: W1029 00:41:05.369839 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.369900 kubelet[2754]: E1029 00:41:05.369855 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.370603 kubelet[2754]: E1029 00:41:05.370556 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.370603 kubelet[2754]: W1029 00:41:05.370573 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.370603 kubelet[2754]: E1029 00:41:05.370586 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.371453 kubelet[2754]: E1029 00:41:05.371411 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.371453 kubelet[2754]: W1029 00:41:05.371424 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.371453 kubelet[2754]: E1029 00:41:05.371437 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.371904 kubelet[2754]: E1029 00:41:05.371835 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.371904 kubelet[2754]: W1029 00:41:05.371846 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.371904 kubelet[2754]: E1029 00:41:05.371857 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.373074 kubelet[2754]: E1029 00:41:05.373057 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.373250 kubelet[2754]: W1029 00:41:05.373156 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.373250 kubelet[2754]: E1029 00:41:05.373172 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.374905 kubelet[2754]: E1029 00:41:05.374089 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.374905 kubelet[2754]: W1029 00:41:05.374104 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.374905 kubelet[2754]: E1029 00:41:05.374849 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.377936 kubelet[2754]: E1029 00:41:05.377723 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.377936 kubelet[2754]: W1029 00:41:05.377742 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.377936 kubelet[2754]: E1029 00:41:05.377764 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.378217 kubelet[2754]: E1029 00:41:05.378159 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.378217 kubelet[2754]: W1029 00:41:05.378170 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.378217 kubelet[2754]: E1029 00:41:05.378182 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:05.405509 kubelet[2754]: E1029 00:41:05.405308 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:05.405509 kubelet[2754]: W1029 00:41:05.405373 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:05.405509 kubelet[2754]: E1029 00:41:05.405398 2754 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:05.447565 containerd[1577]: time="2025-10-29T00:41:05.447504589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b4kcr,Uid:c062fa9a-a279-4d3b-bd19-9cc458492b14,Namespace:calico-system,Attempt:0,} returns sandbox id \"374f0771662befed8cb6dfc2abc6c3444ab82cb4bf2fb37c6fc3fc5194460033\"" Oct 29 00:41:05.448777 kubelet[2754]: E1029 00:41:05.448753 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:05.450545 containerd[1577]: time="2025-10-29T00:41:05.450444011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 00:41:05.503050 containerd[1577]: time="2025-10-29T00:41:05.502991242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796bb94b9b-gltnc,Uid:3cddef21-88b9-4369-8214-f59707d0550d,Namespace:calico-system,Attempt:0,} returns sandbox id \"797b5e4f5997592878ff0611fdff95efe80db9f91f3ae1c9ae63bedf0e0864a2\"" Oct 29 00:41:05.505042 kubelet[2754]: E1029 00:41:05.504989 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:06.871510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3227970589.mount: Deactivated successfully. 
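Note: the recurring "Nameserver limits exceeded" entries report that the node's resolv.conf lists more nameservers than the kubelet will pass through, so the applied line is cut down to three entries (67.207.67.3 67.207.67.2 67.207.67.3). A hedged sketch of that clamping behaviour, not the kubelet's actual dns.go code:

package main

import (
	"fmt"
	"strings"
)

// maxNameservers matches the three-entry applied line shown in the dns.go
// warnings above; the helper below is illustrative, not kubelet code.
const maxNameservers = 3

func clampNameservers(servers []string) []string {
	if len(servers) > maxNameservers {
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical host resolv.conf contents; only the first three survive.
	applied := clampNameservers([]string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.0.0.2"})
	fmt.Println("applied nameserver line:", strings.Join(applied, " "))
}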
Oct 29 00:41:06.987934 containerd[1577]: time="2025-10-29T00:41:06.987323453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:06.988797 containerd[1577]: time="2025-10-29T00:41:06.988770201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Oct 29 00:41:06.989250 containerd[1577]: time="2025-10-29T00:41:06.989229453Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:06.993003 containerd[1577]: time="2025-10-29T00:41:06.992495197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:06.995126 containerd[1577]: time="2025-10-29T00:41:06.995087882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.544592963s" Oct 29 00:41:06.996412 containerd[1577]: time="2025-10-29T00:41:06.996385871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 29 00:41:07.000336 containerd[1577]: time="2025-10-29T00:41:07.000020259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 29 00:41:07.007383 containerd[1577]: time="2025-10-29T00:41:07.007130182Z" level=info msg="CreateContainer within sandbox \"374f0771662befed8cb6dfc2abc6c3444ab82cb4bf2fb37c6fc3fc5194460033\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 00:41:07.014389 containerd[1577]: time="2025-10-29T00:41:07.014337579Z" level=info msg="Container 996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:07.027893 containerd[1577]: time="2025-10-29T00:41:07.027849906Z" level=info msg="CreateContainer within sandbox \"374f0771662befed8cb6dfc2abc6c3444ab82cb4bf2fb37c6fc3fc5194460033\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635\"" Oct 29 00:41:07.029158 containerd[1577]: time="2025-10-29T00:41:07.029123970Z" level=info msg="StartContainer for \"996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635\"" Oct 29 00:41:07.030596 containerd[1577]: time="2025-10-29T00:41:07.030564075Z" level=info msg="connecting to shim 996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635" address="unix:///run/containerd/s/03f87e3d4248d53e1ccc7e9ac3069a78bfa71ececccae15159d4c2256e1e5f94" protocol=ttrpc version=3 Oct 29 00:41:07.073846 systemd[1]: Started cri-containerd-996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635.scope - libcontainer container 996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635. 
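Note: the pod2daemon-flexvol pull above is reported as completing "in 1.544592963s". That figure is containerd's own measurement of the pull, and it lines up closely with the spacing between the PullImage request and the Pulled image event in the log. A small Go check of that spacing, using the two containerd timestamps copied from the entries above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps of the "PullImage" request and the "Pulled image" event above.
	start, err := time.Parse(time.RFC3339Nano, "2025-10-29T00:41:05.450444011Z")
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2025-10-29T00:41:06.995087882Z")
	if err != nil {
		panic(err)
	}

	// Prints 1.544643871s; the "in 1.544592963s" figure is measured inside
	// the pull operation itself, hence marginally smaller.
	fmt.Println("log-line spacing:", done.Sub(start))
}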
Oct 29 00:41:07.118558 kubelet[2754]: E1029 00:41:07.118268 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:41:07.138754 containerd[1577]: time="2025-10-29T00:41:07.138615300Z" level=info msg="StartContainer for \"996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635\" returns successfully" Oct 29 00:41:07.156537 systemd[1]: cri-containerd-996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635.scope: Deactivated successfully. Oct 29 00:41:07.180054 containerd[1577]: time="2025-10-29T00:41:07.179883449Z" level=info msg="received exit event container_id:\"996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635\" id:\"996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635\" pid:3377 exited_at:{seconds:1761698467 nanos:160231971}" Oct 29 00:41:07.243760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635-rootfs.mount: Deactivated successfully. Oct 29 00:41:07.266932 containerd[1577]: time="2025-10-29T00:41:07.266109880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635\" id:\"996f8b646afcfbdf3a37e5893e57888a89d01b2a85fe1e5dc8790a25bc39e635\" pid:3377 exited_at:{seconds:1761698467 nanos:160231971}" Oct 29 00:41:07.275300 kubelet[2754]: E1029 00:41:07.275266 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:08.260647 kubelet[2754]: E1029 00:41:08.260615 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:09.116684 kubelet[2754]: E1029 00:41:09.116636 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:41:09.524758 containerd[1577]: time="2025-10-29T00:41:09.524618373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:09.525766 containerd[1577]: time="2025-10-29T00:41:09.525703783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Oct 29 00:41:09.527691 containerd[1577]: time="2025-10-29T00:41:09.526444007Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:09.528671 containerd[1577]: time="2025-10-29T00:41:09.528622240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:09.530321 containerd[1577]: time="2025-10-29T00:41:09.529538207Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.528495327s" Oct 29 00:41:09.530321 containerd[1577]: time="2025-10-29T00:41:09.529587435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 29 00:41:09.572092 containerd[1577]: time="2025-10-29T00:41:09.572035726Z" level=info msg="CreateContainer within sandbox \"797b5e4f5997592878ff0611fdff95efe80db9f91f3ae1c9ae63bedf0e0864a2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 00:41:09.572455 containerd[1577]: time="2025-10-29T00:41:09.572419728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 00:41:09.582694 containerd[1577]: time="2025-10-29T00:41:09.582639921Z" level=info msg="Container 82658e9e9a61e9f70a3a32986881edc3958bc0745f841f70df07d28df9d35124: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:09.588870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4168152214.mount: Deactivated successfully. Oct 29 00:41:09.600734 containerd[1577]: time="2025-10-29T00:41:09.600687729Z" level=info msg="CreateContainer within sandbox \"797b5e4f5997592878ff0611fdff95efe80db9f91f3ae1c9ae63bedf0e0864a2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"82658e9e9a61e9f70a3a32986881edc3958bc0745f841f70df07d28df9d35124\"" Oct 29 00:41:09.601699 containerd[1577]: time="2025-10-29T00:41:09.601667903Z" level=info msg="StartContainer for \"82658e9e9a61e9f70a3a32986881edc3958bc0745f841f70df07d28df9d35124\"" Oct 29 00:41:09.603442 containerd[1577]: time="2025-10-29T00:41:09.603411941Z" level=info msg="connecting to shim 82658e9e9a61e9f70a3a32986881edc3958bc0745f841f70df07d28df9d35124" address="unix:///run/containerd/s/c73b2e24a7db6092e9db32328d769ad59be33a71fe50ff0637694ebcc2b6c4de" protocol=ttrpc version=3 Oct 29 00:41:09.638587 systemd[1]: Started cri-containerd-82658e9e9a61e9f70a3a32986881edc3958bc0745f841f70df07d28df9d35124.scope - libcontainer container 82658e9e9a61e9f70a3a32986881edc3958bc0745f841f70df07d28df9d35124. 
Oct 29 00:41:09.715859 containerd[1577]: time="2025-10-29T00:41:09.715810317Z" level=info msg="StartContainer for \"82658e9e9a61e9f70a3a32986881edc3958bc0745f841f70df07d28df9d35124\" returns successfully" Oct 29 00:41:10.277337 kubelet[2754]: E1029 00:41:10.276912 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:11.116924 kubelet[2754]: E1029 00:41:11.116459 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:41:11.279082 kubelet[2754]: I1029 00:41:11.279044 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:41:11.279619 kubelet[2754]: E1029 00:41:11.279477 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:13.079304 containerd[1577]: time="2025-10-29T00:41:13.079256742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:13.080741 containerd[1577]: time="2025-10-29T00:41:13.080412379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 29 00:41:13.081405 containerd[1577]: time="2025-10-29T00:41:13.081374400Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:13.084801 containerd[1577]: time="2025-10-29T00:41:13.084749938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:13.087527 containerd[1577]: time="2025-10-29T00:41:13.087456253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.514987097s" Oct 29 00:41:13.087527 containerd[1577]: time="2025-10-29T00:41:13.087497712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 29 00:41:13.093281 containerd[1577]: time="2025-10-29T00:41:13.093233043Z" level=info msg="CreateContainer within sandbox \"374f0771662befed8cb6dfc2abc6c3444ab82cb4bf2fb37c6fc3fc5194460033\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 00:41:13.117874 kubelet[2754]: E1029 00:41:13.117770 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:41:13.121442 containerd[1577]: 
time="2025-10-29T00:41:13.118752112Z" level=info msg="Container ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:13.140624 containerd[1577]: time="2025-10-29T00:41:13.140562389Z" level=info msg="CreateContainer within sandbox \"374f0771662befed8cb6dfc2abc6c3444ab82cb4bf2fb37c6fc3fc5194460033\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7\"" Oct 29 00:41:13.141321 containerd[1577]: time="2025-10-29T00:41:13.141289032Z" level=info msg="StartContainer for \"ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7\"" Oct 29 00:41:13.146373 containerd[1577]: time="2025-10-29T00:41:13.145705862Z" level=info msg="connecting to shim ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7" address="unix:///run/containerd/s/03f87e3d4248d53e1ccc7e9ac3069a78bfa71ececccae15159d4c2256e1e5f94" protocol=ttrpc version=3 Oct 29 00:41:13.174640 systemd[1]: Started cri-containerd-ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7.scope - libcontainer container ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7. Oct 29 00:41:13.231879 containerd[1577]: time="2025-10-29T00:41:13.231798659Z" level=info msg="StartContainer for \"ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7\" returns successfully" Oct 29 00:41:13.317920 kubelet[2754]: E1029 00:41:13.317621 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:13.349975 kubelet[2754]: I1029 00:41:13.344172 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-796bb94b9b-gltnc" podStartSLOduration=5.300652722 podStartE2EDuration="9.344148085s" podCreationTimestamp="2025-10-29 00:41:04 +0000 UTC" firstStartedPulling="2025-10-29 00:41:05.506480077 +0000 UTC m=+24.568102827" lastFinishedPulling="2025-10-29 00:41:09.549975423 +0000 UTC m=+28.611598190" observedRunningTime="2025-10-29 00:41:10.298517258 +0000 UTC m=+29.360140111" watchObservedRunningTime="2025-10-29 00:41:13.344148085 +0000 UTC m=+32.405770860" Oct 29 00:41:13.899463 systemd[1]: cri-containerd-ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7.scope: Deactivated successfully. Oct 29 00:41:13.899746 systemd[1]: cri-containerd-ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7.scope: Consumed 664ms CPU time, 171.1M memory peak, 17.2M read from disk, 171.3M written to disk. 
Oct 29 00:41:13.917561 containerd[1577]: time="2025-10-29T00:41:13.917433193Z" level=info msg="received exit event container_id:\"ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7\" id:\"ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7\" pid:3480 exited_at:{seconds:1761698473 nanos:905866331}" Oct 29 00:41:13.919788 containerd[1577]: time="2025-10-29T00:41:13.919739178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7\" id:\"ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7\" pid:3480 exited_at:{seconds:1761698473 nanos:905866331}" Oct 29 00:41:13.969821 kubelet[2754]: I1029 00:41:13.969752 2754 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 29 00:41:13.971636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad4760b18f099d2dbd0144ad050bfe8603c44aa7a887c1fd2d83ae5a8fd42df7-rootfs.mount: Deactivated successfully. Oct 29 00:41:14.023927 systemd[1]: Created slice kubepods-burstable-pod2087d0f1_acdc_4652_a29c_35bb7514dcab.slice - libcontainer container kubepods-burstable-pod2087d0f1_acdc_4652_a29c_35bb7514dcab.slice. Oct 29 00:41:14.035875 systemd[1]: Created slice kubepods-burstable-pod49e1d0d2_4b83_435e_8d3b_7c6d253bcfc8.slice - libcontainer container kubepods-burstable-pod49e1d0d2_4b83_435e_8d3b_7c6d253bcfc8.slice. Oct 29 00:41:14.037354 kubelet[2754]: I1029 00:41:14.037293 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/973f8417-d690-432c-be64-a1fb3fd7b7ed-calico-apiserver-certs\") pod \"calico-apiserver-64df77bb46-p5v5r\" (UID: \"973f8417-d690-432c-be64-a1fb3fd7b7ed\") " pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" Oct 29 00:41:14.038501 kubelet[2754]: I1029 00:41:14.037666 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcfzs\" (UniqueName: \"kubernetes.io/projected/49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8-kube-api-access-lcfzs\") pod \"coredns-66bc5c9577-276cs\" (UID: \"49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8\") " pod="kube-system/coredns-66bc5c9577-276cs" Oct 29 00:41:14.038501 kubelet[2754]: I1029 00:41:14.037703 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2x5\" (UniqueName: \"kubernetes.io/projected/973f8417-d690-432c-be64-a1fb3fd7b7ed-kube-api-access-nn2x5\") pod \"calico-apiserver-64df77bb46-p5v5r\" (UID: \"973f8417-d690-432c-be64-a1fb3fd7b7ed\") " pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" Oct 29 00:41:14.038501 kubelet[2754]: I1029 00:41:14.037725 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2087d0f1-acdc-4652-a29c-35bb7514dcab-config-volume\") pod \"coredns-66bc5c9577-zrzhm\" (UID: \"2087d0f1-acdc-4652-a29c-35bb7514dcab\") " pod="kube-system/coredns-66bc5c9577-zrzhm" Oct 29 00:41:14.038501 kubelet[2754]: I1029 00:41:14.037740 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwkjm\" (UniqueName: \"kubernetes.io/projected/2087d0f1-acdc-4652-a29c-35bb7514dcab-kube-api-access-jwkjm\") pod \"coredns-66bc5c9577-zrzhm\" (UID: \"2087d0f1-acdc-4652-a29c-35bb7514dcab\") " pod="kube-system/coredns-66bc5c9577-zrzhm" Oct 29 00:41:14.038501 kubelet[2754]: I1029 
00:41:14.037769 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8-config-volume\") pod \"coredns-66bc5c9577-276cs\" (UID: \"49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8\") " pod="kube-system/coredns-66bc5c9577-276cs" Oct 29 00:41:14.052717 systemd[1]: Created slice kubepods-besteffort-pod973f8417_d690_432c_be64_a1fb3fd7b7ed.slice - libcontainer container kubepods-besteffort-pod973f8417_d690_432c_be64_a1fb3fd7b7ed.slice. Oct 29 00:41:14.068797 systemd[1]: Created slice kubepods-besteffort-pod2d968843_0f63_4a87_a681_17c760a75ce2.slice - libcontainer container kubepods-besteffort-pod2d968843_0f63_4a87_a681_17c760a75ce2.slice. Oct 29 00:41:14.080018 systemd[1]: Created slice kubepods-besteffort-pod8c11e02f_6f5f_41e7_8866_65b058b61720.slice - libcontainer container kubepods-besteffort-pod8c11e02f_6f5f_41e7_8866_65b058b61720.slice. Oct 29 00:41:14.097049 systemd[1]: Created slice kubepods-besteffort-pod1db4eddb_85a3_4ff2_842c_fb7c92440b55.slice - libcontainer container kubepods-besteffort-pod1db4eddb_85a3_4ff2_842c_fb7c92440b55.slice. Oct 29 00:41:14.107536 systemd[1]: Created slice kubepods-besteffort-podb74f2d9f_7641_4b72_a20e_ab5d1471a2c4.slice - libcontainer container kubepods-besteffort-podb74f2d9f_7641_4b72_a20e_ab5d1471a2c4.slice. Oct 29 00:41:14.138462 kubelet[2754]: I1029 00:41:14.138370 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c11e02f-6f5f-41e7-8866-65b058b61720-tigera-ca-bundle\") pod \"calico-kube-controllers-8f59fd667-w8gqk\" (UID: \"8c11e02f-6f5f-41e7-8866-65b058b61720\") " pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" Oct 29 00:41:14.139029 kubelet[2754]: I1029 00:41:14.138439 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1db4eddb-85a3-4ff2-842c-fb7c92440b55-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-bdqhh\" (UID: \"1db4eddb-85a3-4ff2-842c-fb7c92440b55\") " pod="calico-system/goldmane-7c778bb748-bdqhh" Oct 29 00:41:14.139029 kubelet[2754]: I1029 00:41:14.138518 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2d968843-0f63-4a87-a681-17c760a75ce2-whisker-backend-key-pair\") pod \"whisker-68bb568dcd-mz9v9\" (UID: \"2d968843-0f63-4a87-a681-17c760a75ce2\") " pod="calico-system/whisker-68bb568dcd-mz9v9" Oct 29 00:41:14.139029 kubelet[2754]: I1029 00:41:14.138555 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnnm\" (UniqueName: \"kubernetes.io/projected/2d968843-0f63-4a87-a681-17c760a75ce2-kube-api-access-fnnnm\") pod \"whisker-68bb568dcd-mz9v9\" (UID: \"2d968843-0f63-4a87-a681-17c760a75ce2\") " pod="calico-system/whisker-68bb568dcd-mz9v9" Oct 29 00:41:14.139029 kubelet[2754]: I1029 00:41:14.138724 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b74f2d9f-7641-4b72-a20e-ab5d1471a2c4-calico-apiserver-certs\") pod \"calico-apiserver-64df77bb46-w7m6v\" (UID: \"b74f2d9f-7641-4b72-a20e-ab5d1471a2c4\") " pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" Oct 29 00:41:14.139029 kubelet[2754]: I1029 
00:41:14.138762 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db4eddb-85a3-4ff2-842c-fb7c92440b55-config\") pod \"goldmane-7c778bb748-bdqhh\" (UID: \"1db4eddb-85a3-4ff2-842c-fb7c92440b55\") " pod="calico-system/goldmane-7c778bb748-bdqhh" Oct 29 00:41:14.139283 kubelet[2754]: I1029 00:41:14.138788 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1db4eddb-85a3-4ff2-842c-fb7c92440b55-goldmane-key-pair\") pod \"goldmane-7c778bb748-bdqhh\" (UID: \"1db4eddb-85a3-4ff2-842c-fb7c92440b55\") " pod="calico-system/goldmane-7c778bb748-bdqhh" Oct 29 00:41:14.139283 kubelet[2754]: I1029 00:41:14.138829 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnp97\" (UniqueName: \"kubernetes.io/projected/1db4eddb-85a3-4ff2-842c-fb7c92440b55-kube-api-access-mnp97\") pod \"goldmane-7c778bb748-bdqhh\" (UID: \"1db4eddb-85a3-4ff2-842c-fb7c92440b55\") " pod="calico-system/goldmane-7c778bb748-bdqhh" Oct 29 00:41:14.139283 kubelet[2754]: I1029 00:41:14.138856 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d968843-0f63-4a87-a681-17c760a75ce2-whisker-ca-bundle\") pod \"whisker-68bb568dcd-mz9v9\" (UID: \"2d968843-0f63-4a87-a681-17c760a75ce2\") " pod="calico-system/whisker-68bb568dcd-mz9v9" Oct 29 00:41:14.139283 kubelet[2754]: I1029 00:41:14.138886 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plln6\" (UniqueName: \"kubernetes.io/projected/8c11e02f-6f5f-41e7-8866-65b058b61720-kube-api-access-plln6\") pod \"calico-kube-controllers-8f59fd667-w8gqk\" (UID: \"8c11e02f-6f5f-41e7-8866-65b058b61720\") " pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" Oct 29 00:41:14.139283 kubelet[2754]: I1029 00:41:14.138912 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbzf\" (UniqueName: \"kubernetes.io/projected/b74f2d9f-7641-4b72-a20e-ab5d1471a2c4-kube-api-access-txbzf\") pod \"calico-apiserver-64df77bb46-w7m6v\" (UID: \"b74f2d9f-7641-4b72-a20e-ab5d1471a2c4\") " pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" Oct 29 00:41:14.333700 kubelet[2754]: E1029 00:41:14.333662 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:14.346952 kubelet[2754]: E1029 00:41:14.346915 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:14.349710 kubelet[2754]: E1029 00:41:14.349672 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:14.351674 containerd[1577]: time="2025-10-29T00:41:14.351219962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zrzhm,Uid:2087d0f1-acdc-4652-a29c-35bb7514dcab,Namespace:kube-system,Attempt:0,}" Oct 29 00:41:14.351674 containerd[1577]: time="2025-10-29T00:41:14.351316932Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-276cs,Uid:49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8,Namespace:kube-system,Attempt:0,}" Oct 29 00:41:14.353050 containerd[1577]: time="2025-10-29T00:41:14.353012002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 00:41:14.367831 containerd[1577]: time="2025-10-29T00:41:14.367590046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64df77bb46-p5v5r,Uid:973f8417-d690-432c-be64-a1fb3fd7b7ed,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:41:14.379664 containerd[1577]: time="2025-10-29T00:41:14.379617142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68bb568dcd-mz9v9,Uid:2d968843-0f63-4a87-a681-17c760a75ce2,Namespace:calico-system,Attempt:0,}" Oct 29 00:41:14.398259 containerd[1577]: time="2025-10-29T00:41:14.398173202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f59fd667-w8gqk,Uid:8c11e02f-6f5f-41e7-8866-65b058b61720,Namespace:calico-system,Attempt:0,}" Oct 29 00:41:14.409871 containerd[1577]: time="2025-10-29T00:41:14.409814740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bdqhh,Uid:1db4eddb-85a3-4ff2-842c-fb7c92440b55,Namespace:calico-system,Attempt:0,}" Oct 29 00:41:14.424306 containerd[1577]: time="2025-10-29T00:41:14.424082245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64df77bb46-w7m6v,Uid:b74f2d9f-7641-4b72-a20e-ab5d1471a2c4,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:41:14.719115 containerd[1577]: time="2025-10-29T00:41:14.718737494Z" level=error msg="Failed to destroy network for sandbox \"9c323e28fd030c287b60d93c132ea2a766187fb87dadce3e817f428923de7b5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.726288 containerd[1577]: time="2025-10-29T00:41:14.722093048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64df77bb46-p5v5r,Uid:973f8417-d690-432c-be64-a1fb3fd7b7ed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c323e28fd030c287b60d93c132ea2a766187fb87dadce3e817f428923de7b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.727492 kubelet[2754]: E1029 00:41:14.727053 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c323e28fd030c287b60d93c132ea2a766187fb87dadce3e817f428923de7b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.727492 kubelet[2754]: E1029 00:41:14.727117 2754 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c323e28fd030c287b60d93c132ea2a766187fb87dadce3e817f428923de7b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" Oct 29 00:41:14.727492 kubelet[2754]: E1029 00:41:14.727139 2754 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c323e28fd030c287b60d93c132ea2a766187fb87dadce3e817f428923de7b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" Oct 29 00:41:14.727700 kubelet[2754]: E1029 00:41:14.727209 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64df77bb46-p5v5r_calico-apiserver(973f8417-d690-432c-be64-a1fb3fd7b7ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64df77bb46-p5v5r_calico-apiserver(973f8417-d690-432c-be64-a1fb3fd7b7ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c323e28fd030c287b60d93c132ea2a766187fb87dadce3e817f428923de7b5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" podUID="973f8417-d690-432c-be64-a1fb3fd7b7ed" Oct 29 00:41:14.732661 containerd[1577]: time="2025-10-29T00:41:14.732595115Z" level=error msg="Failed to destroy network for sandbox \"c8146dc97fa6d1cb32792ef3c8e1ad9f3a72020bbea4bbd77d8c9a72a1ce3461\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.734981 containerd[1577]: time="2025-10-29T00:41:14.734916138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-276cs,Uid:49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8146dc97fa6d1cb32792ef3c8e1ad9f3a72020bbea4bbd77d8c9a72a1ce3461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.740823 containerd[1577]: time="2025-10-29T00:41:14.740708791Z" level=error msg="Failed to destroy network for sandbox \"f4ae8e4b4e29b6223b28f39cfe92ba2d5900a36aa0b0aa31adfa2671d6d48491\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.743261 kubelet[2754]: E1029 00:41:14.743211 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8146dc97fa6d1cb32792ef3c8e1ad9f3a72020bbea4bbd77d8c9a72a1ce3461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.743592 kubelet[2754]: E1029 00:41:14.743276 2754 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8146dc97fa6d1cb32792ef3c8e1ad9f3a72020bbea4bbd77d8c9a72a1ce3461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-276cs" Oct 29 
00:41:14.743650 kubelet[2754]: E1029 00:41:14.743298 2754 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8146dc97fa6d1cb32792ef3c8e1ad9f3a72020bbea4bbd77d8c9a72a1ce3461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-276cs" Oct 29 00:41:14.743737 kubelet[2754]: E1029 00:41:14.743708 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-276cs_kube-system(49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-276cs_kube-system(49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8146dc97fa6d1cb32792ef3c8e1ad9f3a72020bbea4bbd77d8c9a72a1ce3461\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-276cs" podUID="49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8" Oct 29 00:41:14.745722 containerd[1577]: time="2025-10-29T00:41:14.745656635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zrzhm,Uid:2087d0f1-acdc-4652-a29c-35bb7514dcab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4ae8e4b4e29b6223b28f39cfe92ba2d5900a36aa0b0aa31adfa2671d6d48491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.746183 kubelet[2754]: E1029 00:41:14.745967 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4ae8e4b4e29b6223b28f39cfe92ba2d5900a36aa0b0aa31adfa2671d6d48491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.746278 kubelet[2754]: E1029 00:41:14.746248 2754 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4ae8e4b4e29b6223b28f39cfe92ba2d5900a36aa0b0aa31adfa2671d6d48491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zrzhm" Oct 29 00:41:14.746320 kubelet[2754]: E1029 00:41:14.746291 2754 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4ae8e4b4e29b6223b28f39cfe92ba2d5900a36aa0b0aa31adfa2671d6d48491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zrzhm" Oct 29 00:41:14.747003 kubelet[2754]: E1029 00:41:14.746743 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zrzhm_kube-system(2087d0f1-acdc-4652-a29c-35bb7514dcab)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"coredns-66bc5c9577-zrzhm_kube-system(2087d0f1-acdc-4652-a29c-35bb7514dcab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4ae8e4b4e29b6223b28f39cfe92ba2d5900a36aa0b0aa31adfa2671d6d48491\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zrzhm" podUID="2087d0f1-acdc-4652-a29c-35bb7514dcab" Oct 29 00:41:14.795611 containerd[1577]: time="2025-10-29T00:41:14.795558389Z" level=error msg="Failed to destroy network for sandbox \"52c37f606bf744333b677dacd4a8113c0da55ce45e728af37242370d92eb51c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.796516 containerd[1577]: time="2025-10-29T00:41:14.796392947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bdqhh,Uid:1db4eddb-85a3-4ff2-842c-fb7c92440b55,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c37f606bf744333b677dacd4a8113c0da55ce45e728af37242370d92eb51c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.796892 kubelet[2754]: E1029 00:41:14.796844 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c37f606bf744333b677dacd4a8113c0da55ce45e728af37242370d92eb51c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.796958 kubelet[2754]: E1029 00:41:14.796922 2754 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c37f606bf744333b677dacd4a8113c0da55ce45e728af37242370d92eb51c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bdqhh" Oct 29 00:41:14.796958 kubelet[2754]: E1029 00:41:14.796944 2754 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c37f606bf744333b677dacd4a8113c0da55ce45e728af37242370d92eb51c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bdqhh" Oct 29 00:41:14.797278 kubelet[2754]: E1029 00:41:14.797004 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-bdqhh_calico-system(1db4eddb-85a3-4ff2-842c-fb7c92440b55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-bdqhh_calico-system(1db4eddb-85a3-4ff2-842c-fb7c92440b55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52c37f606bf744333b677dacd4a8113c0da55ce45e728af37242370d92eb51c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-bdqhh" podUID="1db4eddb-85a3-4ff2-842c-fb7c92440b55" Oct 29 00:41:14.802968 containerd[1577]: time="2025-10-29T00:41:14.802922631Z" level=error msg="Failed to destroy network for sandbox \"e177527f891031ebc95eb79bd6e817701873528a8a46279c003d1890e1c9b189\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.803987 containerd[1577]: time="2025-10-29T00:41:14.803949269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64df77bb46-w7m6v,Uid:b74f2d9f-7641-4b72-a20e-ab5d1471a2c4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e177527f891031ebc95eb79bd6e817701873528a8a46279c003d1890e1c9b189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.804428 kubelet[2754]: E1029 00:41:14.804391 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e177527f891031ebc95eb79bd6e817701873528a8a46279c003d1890e1c9b189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.804519 kubelet[2754]: E1029 00:41:14.804449 2754 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e177527f891031ebc95eb79bd6e817701873528a8a46279c003d1890e1c9b189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" Oct 29 00:41:14.804519 kubelet[2754]: E1029 00:41:14.804470 2754 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e177527f891031ebc95eb79bd6e817701873528a8a46279c003d1890e1c9b189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" Oct 29 00:41:14.804589 kubelet[2754]: E1029 00:41:14.804532 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64df77bb46-w7m6v_calico-apiserver(b74f2d9f-7641-4b72-a20e-ab5d1471a2c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64df77bb46-w7m6v_calico-apiserver(b74f2d9f-7641-4b72-a20e-ab5d1471a2c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e177527f891031ebc95eb79bd6e817701873528a8a46279c003d1890e1c9b189\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" podUID="b74f2d9f-7641-4b72-a20e-ab5d1471a2c4" Oct 29 00:41:14.806494 containerd[1577]: time="2025-10-29T00:41:14.806307303Z" level=error 
msg="Failed to destroy network for sandbox \"bbaf1c7e44c07517bea6b17ee40764a3a446ed840758d14d6d4d5b5fa14fcc08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.806903 containerd[1577]: time="2025-10-29T00:41:14.806389161Z" level=error msg="Failed to destroy network for sandbox \"ec1ec467273550a6934542ccbe064346aee5e29bc2eff1cc867132f63879a31a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.807918 containerd[1577]: time="2025-10-29T00:41:14.807853332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f59fd667-w8gqk,Uid:8c11e02f-6f5f-41e7-8866-65b058b61720,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1ec467273550a6934542ccbe064346aee5e29bc2eff1cc867132f63879a31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.808331 kubelet[2754]: E1029 00:41:14.808090 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1ec467273550a6934542ccbe064346aee5e29bc2eff1cc867132f63879a31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.808331 kubelet[2754]: E1029 00:41:14.808170 2754 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1ec467273550a6934542ccbe064346aee5e29bc2eff1cc867132f63879a31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" Oct 29 00:41:14.808331 kubelet[2754]: E1029 00:41:14.808194 2754 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1ec467273550a6934542ccbe064346aee5e29bc2eff1cc867132f63879a31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" Oct 29 00:41:14.808556 kubelet[2754]: E1029 00:41:14.808253 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8f59fd667-w8gqk_calico-system(8c11e02f-6f5f-41e7-8866-65b058b61720)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8f59fd667-w8gqk_calico-system(8c11e02f-6f5f-41e7-8866-65b058b61720)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec1ec467273550a6934542ccbe064346aee5e29bc2eff1cc867132f63879a31a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" 
podUID="8c11e02f-6f5f-41e7-8866-65b058b61720" Oct 29 00:41:14.809319 containerd[1577]: time="2025-10-29T00:41:14.809246892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68bb568dcd-mz9v9,Uid:2d968843-0f63-4a87-a681-17c760a75ce2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbaf1c7e44c07517bea6b17ee40764a3a446ed840758d14d6d4d5b5fa14fcc08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.809659 kubelet[2754]: E1029 00:41:14.809612 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbaf1c7e44c07517bea6b17ee40764a3a446ed840758d14d6d4d5b5fa14fcc08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:14.809815 kubelet[2754]: E1029 00:41:14.809685 2754 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbaf1c7e44c07517bea6b17ee40764a3a446ed840758d14d6d4d5b5fa14fcc08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68bb568dcd-mz9v9" Oct 29 00:41:14.809868 kubelet[2754]: E1029 00:41:14.809811 2754 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbaf1c7e44c07517bea6b17ee40764a3a446ed840758d14d6d4d5b5fa14fcc08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68bb568dcd-mz9v9" Oct 29 00:41:14.810219 kubelet[2754]: E1029 00:41:14.809903 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68bb568dcd-mz9v9_calico-system(2d968843-0f63-4a87-a681-17c760a75ce2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68bb568dcd-mz9v9_calico-system(2d968843-0f63-4a87-a681-17c760a75ce2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbaf1c7e44c07517bea6b17ee40764a3a446ed840758d14d6d4d5b5fa14fcc08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68bb568dcd-mz9v9" podUID="2d968843-0f63-4a87-a681-17c760a75ce2" Oct 29 00:41:15.123679 systemd[1]: Created slice kubepods-besteffort-poda7b26047_6837_40e0_9ca7_4fee0f45a405.slice - libcontainer container kubepods-besteffort-poda7b26047_6837_40e0_9ca7_4fee0f45a405.slice. 
Oct 29 00:41:15.128252 containerd[1577]: time="2025-10-29T00:41:15.127920502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlqqf,Uid:a7b26047-6837-40e0-9ca7-4fee0f45a405,Namespace:calico-system,Attempt:0,}" Oct 29 00:41:15.208055 containerd[1577]: time="2025-10-29T00:41:15.208010445Z" level=error msg="Failed to destroy network for sandbox \"e722efe8891089521078edabbfaee6f39849394cf836ebd082d3e4da794314d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:15.212483 systemd[1]: run-netns-cni\x2d353e868d\x2d3348\x2dc231\x2ddd09\x2d55aeffbaaa55.mount: Deactivated successfully. Oct 29 00:41:15.214220 containerd[1577]: time="2025-10-29T00:41:15.214152597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlqqf,Uid:a7b26047-6837-40e0-9ca7-4fee0f45a405,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e722efe8891089521078edabbfaee6f39849394cf836ebd082d3e4da794314d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:15.215652 kubelet[2754]: E1029 00:41:15.214745 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e722efe8891089521078edabbfaee6f39849394cf836ebd082d3e4da794314d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:41:15.215652 kubelet[2754]: E1029 00:41:15.214843 2754 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e722efe8891089521078edabbfaee6f39849394cf836ebd082d3e4da794314d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mlqqf" Oct 29 00:41:15.215652 kubelet[2754]: E1029 00:41:15.214870 2754 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e722efe8891089521078edabbfaee6f39849394cf836ebd082d3e4da794314d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mlqqf" Oct 29 00:41:15.216452 kubelet[2754]: E1029 00:41:15.214934 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mlqqf_calico-system(a7b26047-6837-40e0-9ca7-4fee0f45a405)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mlqqf_calico-system(a7b26047-6837-40e0-9ca7-4fee0f45a405)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e722efe8891089521078edabbfaee6f39849394cf836ebd082d3e4da794314d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" 
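Every sandbox failure above reduces to the same condition: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that the calico/node container writes only after it has started and mounted /var/lib/calico/. A minimal Go sketch of that gate, illustrative only and not Calico's actual code, reproduces the wording seen in the errors:

package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// readNodename mimics the check behind the repeated failures above: the file
// exists only once calico/node has started and written its node name there.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// A missing file yields the same "stat ...: no such file or directory"
		// text as the log, with the diagnostic hint appended.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if name, err := readNodename(); err != nil {
		fmt.Println("CNI add would fail:", err)
	} else {
		fmt.Println("CNI add can proceed on node", name)
	}
}

Once the calico/node container is running (after the image pull completes at 00:41:20 below), the file is present and the later CNI adds for the whisker and coredns pods go through.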
Oct 29 00:41:20.243459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699159978.mount: Deactivated successfully. Oct 29 00:41:20.293257 containerd[1577]: time="2025-10-29T00:41:20.285627984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:20.295274 containerd[1577]: time="2025-10-29T00:41:20.289078867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 29 00:41:20.298931 containerd[1577]: time="2025-10-29T00:41:20.298887332Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:20.301798 containerd[1577]: time="2025-10-29T00:41:20.301144313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.94808156s" Oct 29 00:41:20.302150 containerd[1577]: time="2025-10-29T00:41:20.302126485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 29 00:41:20.302511 containerd[1577]: time="2025-10-29T00:41:20.301598636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:20.331720 containerd[1577]: time="2025-10-29T00:41:20.331663693Z" level=info msg="CreateContainer within sandbox \"374f0771662befed8cb6dfc2abc6c3444ab82cb4bf2fb37c6fc3fc5194460033\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 00:41:20.344890 containerd[1577]: time="2025-10-29T00:41:20.343560309Z" level=info msg="Container 3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:20.362239 containerd[1577]: time="2025-10-29T00:41:20.362184099Z" level=info msg="CreateContainer within sandbox \"374f0771662befed8cb6dfc2abc6c3444ab82cb4bf2fb37c6fc3fc5194460033\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d\"" Oct 29 00:41:20.364180 containerd[1577]: time="2025-10-29T00:41:20.364115762Z" level=info msg="StartContainer for \"3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d\"" Oct 29 00:41:20.372902 containerd[1577]: time="2025-10-29T00:41:20.372819179Z" level=info msg="connecting to shim 3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d" address="unix:///run/containerd/s/03f87e3d4248d53e1ccc7e9ac3069a78bfa71ececccae15159d4c2256e1e5f94" protocol=ttrpc version=3 Oct 29 00:41:20.503648 systemd[1]: Started cri-containerd-3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d.scope - libcontainer container 3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d. Oct 29 00:41:20.569519 containerd[1577]: time="2025-10-29T00:41:20.569476667Z" level=info msg="StartContainer for \"3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d\" returns successfully" Oct 29 00:41:20.827470 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Oct 29 00:41:20.828582 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 29 00:41:21.096822 kubelet[2754]: I1029 00:41:21.095694 2754 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2d968843-0f63-4a87-a681-17c760a75ce2-whisker-backend-key-pair\") pod \"2d968843-0f63-4a87-a681-17c760a75ce2\" (UID: \"2d968843-0f63-4a87-a681-17c760a75ce2\") " Oct 29 00:41:21.096822 kubelet[2754]: I1029 00:41:21.095760 2754 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnnnm\" (UniqueName: \"kubernetes.io/projected/2d968843-0f63-4a87-a681-17c760a75ce2-kube-api-access-fnnnm\") pod \"2d968843-0f63-4a87-a681-17c760a75ce2\" (UID: \"2d968843-0f63-4a87-a681-17c760a75ce2\") " Oct 29 00:41:21.096822 kubelet[2754]: I1029 00:41:21.095789 2754 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d968843-0f63-4a87-a681-17c760a75ce2-whisker-ca-bundle\") pod \"2d968843-0f63-4a87-a681-17c760a75ce2\" (UID: \"2d968843-0f63-4a87-a681-17c760a75ce2\") " Oct 29 00:41:21.096822 kubelet[2754]: I1029 00:41:21.096281 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d968843-0f63-4a87-a681-17c760a75ce2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2d968843-0f63-4a87-a681-17c760a75ce2" (UID: "2d968843-0f63-4a87-a681-17c760a75ce2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 00:41:21.103442 kubelet[2754]: I1029 00:41:21.102966 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d968843-0f63-4a87-a681-17c760a75ce2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2d968843-0f63-4a87-a681-17c760a75ce2" (UID: "2d968843-0f63-4a87-a681-17c760a75ce2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 00:41:21.104965 kubelet[2754]: I1029 00:41:21.104910 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d968843-0f63-4a87-a681-17c760a75ce2-kube-api-access-fnnnm" (OuterVolumeSpecName: "kube-api-access-fnnnm") pod "2d968843-0f63-4a87-a681-17c760a75ce2" (UID: "2d968843-0f63-4a87-a681-17c760a75ce2"). InnerVolumeSpecName "kube-api-access-fnnnm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 00:41:21.127880 systemd[1]: Removed slice kubepods-besteffort-pod2d968843_0f63_4a87_a681_17c760a75ce2.slice - libcontainer container kubepods-besteffort-pod2d968843_0f63_4a87_a681_17c760a75ce2.slice. 
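The mount unit names in the entries above, such as run-netns-cni\x2d353e868d... and var-lib-containerd-tmpmounts-containerd\x2dmount1699159978.mount, are systemd path escapes: "/" separators become "-", while a literal "-" in the path is hex-escaped so the two remain distinguishable. A rough Go stand-in for systemd-escape --path, an approximation for illustration rather than systemd's implementation:

package main

import (
	"fmt"
	"strings"
)

// escapePath roughly follows systemd path escaping: "/" separators become "-";
// letters, digits, "_" and non-leading "." pass through; everything else,
// including a literal "-", is hex-escaped as \xNN.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the netns mount unit name seen a few entries earlier.
	fmt.Println(escapePath("/run/netns/cni-353e868d-3348-c231-dd09-55aeffbaaa55") + ".mount")
}

Running it prints run-netns-cni\x2d353e868d\x2d3348\x2dc231\x2ddd09\x2d55aeffbaaa55.mount, the unit deactivated at 00:41:15.212483 above.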
Oct 29 00:41:21.196745 kubelet[2754]: I1029 00:41:21.196705 2754 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fnnnm\" (UniqueName: \"kubernetes.io/projected/2d968843-0f63-4a87-a681-17c760a75ce2-kube-api-access-fnnnm\") on node \"ci-4487.0.0-n-61970e6314\" DevicePath \"\"" Oct 29 00:41:21.196745 kubelet[2754]: I1029 00:41:21.196735 2754 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d968843-0f63-4a87-a681-17c760a75ce2-whisker-ca-bundle\") on node \"ci-4487.0.0-n-61970e6314\" DevicePath \"\"" Oct 29 00:41:21.196745 kubelet[2754]: I1029 00:41:21.196746 2754 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2d968843-0f63-4a87-a681-17c760a75ce2-whisker-backend-key-pair\") on node \"ci-4487.0.0-n-61970e6314\" DevicePath \"\"" Oct 29 00:41:21.247187 systemd[1]: var-lib-kubelet-pods-2d968843\x2d0f63\x2d4a87\x2da681\x2d17c760a75ce2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfnnnm.mount: Deactivated successfully. Oct 29 00:41:21.247305 systemd[1]: var-lib-kubelet-pods-2d968843\x2d0f63\x2d4a87\x2da681\x2d17c760a75ce2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 29 00:41:21.375515 kubelet[2754]: E1029 00:41:21.375404 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:21.395374 kubelet[2754]: I1029 00:41:21.394987 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b4kcr" podStartSLOduration=2.541471967 podStartE2EDuration="17.394971949s" podCreationTimestamp="2025-10-29 00:41:04 +0000 UTC" firstStartedPulling="2025-10-29 00:41:05.449867123 +0000 UTC m=+24.511489873" lastFinishedPulling="2025-10-29 00:41:20.303367104 +0000 UTC m=+39.364989855" observedRunningTime="2025-10-29 00:41:21.394828119 +0000 UTC m=+40.456450890" watchObservedRunningTime="2025-10-29 00:41:21.394971949 +0000 UTC m=+40.456594759" Oct 29 00:41:21.461852 systemd[1]: Created slice kubepods-besteffort-podfbcce175_ed1f_4ce1_b176_b5da4c1b969c.slice - libcontainer container kubepods-besteffort-podfbcce175_ed1f_4ce1_b176_b5da4c1b969c.slice. 
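The pod_startup_latency_tracker entry above for calico-node-b4kcr is plain arithmetic over the logged timestamps: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration lines up with that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling); the latter reading is an inference from how the numbers fit together, not taken from kubelet source. A short Go check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2025-10-29T00:41:04Z")
	firstPull, _ := time.Parse(time.RFC3339Nano, "2025-10-29T00:41:05.449867123Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2025-10-29T00:41:20.303367104Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-10-29T00:41:21.394971949Z")

	e2e := running.Sub(created)          // observedRunningTime - podCreationTimestamp
	slo := e2e - lastPull.Sub(firstPull) // e2e minus the image-pull window (assumed)

	fmt.Println("podStartE2EDuration:", e2e) // 17.394971949s
	fmt.Println("podStartSLOduration:", slo) // 2.541471968s
}

This prints 17.394971949s and 2.541471968s, matching the logged values to within a nanosecond of rounding.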
Oct 29 00:41:21.498472 kubelet[2754]: I1029 00:41:21.498395 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fbcce175-ed1f-4ce1-b176-b5da4c1b969c-whisker-backend-key-pair\") pod \"whisker-7d8c4588fb-cgxbb\" (UID: \"fbcce175-ed1f-4ce1-b176-b5da4c1b969c\") " pod="calico-system/whisker-7d8c4588fb-cgxbb" Oct 29 00:41:21.498472 kubelet[2754]: I1029 00:41:21.498480 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjmk\" (UniqueName: \"kubernetes.io/projected/fbcce175-ed1f-4ce1-b176-b5da4c1b969c-kube-api-access-qkjmk\") pod \"whisker-7d8c4588fb-cgxbb\" (UID: \"fbcce175-ed1f-4ce1-b176-b5da4c1b969c\") " pod="calico-system/whisker-7d8c4588fb-cgxbb" Oct 29 00:41:21.498697 kubelet[2754]: I1029 00:41:21.498527 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbcce175-ed1f-4ce1-b176-b5da4c1b969c-whisker-ca-bundle\") pod \"whisker-7d8c4588fb-cgxbb\" (UID: \"fbcce175-ed1f-4ce1-b176-b5da4c1b969c\") " pod="calico-system/whisker-7d8c4588fb-cgxbb" Oct 29 00:41:21.770690 containerd[1577]: time="2025-10-29T00:41:21.770544732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d8c4588fb-cgxbb,Uid:fbcce175-ed1f-4ce1-b176-b5da4c1b969c,Namespace:calico-system,Attempt:0,}" Oct 29 00:41:22.064933 systemd-networkd[1487]: cali08a92041301: Link UP Oct 29 00:41:22.065963 systemd-networkd[1487]: cali08a92041301: Gained carrier Oct 29 00:41:22.092431 containerd[1577]: 2025-10-29 00:41:21.807 [INFO][3805] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:41:22.092431 containerd[1577]: 2025-10-29 00:41:21.836 [INFO][3805] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0 whisker-7d8c4588fb- calico-system fbcce175-ed1f-4ce1-b176-b5da4c1b969c 916 0 2025-10-29 00:41:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d8c4588fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487.0.0-n-61970e6314 whisker-7d8c4588fb-cgxbb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali08a92041301 [] [] }} ContainerID="3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" Namespace="calico-system" Pod="whisker-7d8c4588fb-cgxbb" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-" Oct 29 00:41:22.092431 containerd[1577]: 2025-10-29 00:41:21.836 [INFO][3805] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" Namespace="calico-system" Pod="whisker-7d8c4588fb-cgxbb" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0" Oct 29 00:41:22.092431 containerd[1577]: 2025-10-29 00:41:21.990 [INFO][3817] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" HandleID="k8s-pod-network.3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" Workload="ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0" Oct 29 00:41:22.092934 containerd[1577]: 2025-10-29 00:41:21.992 [INFO][3817] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" HandleID="k8s-pod-network.3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" Workload="ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002aa290), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-61970e6314", "pod":"whisker-7d8c4588fb-cgxbb", "timestamp":"2025-10-29 00:41:21.990602788 +0000 UTC"}, Hostname:"ci-4487.0.0-n-61970e6314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:41:22.092934 containerd[1577]: 2025-10-29 00:41:21.992 [INFO][3817] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:41:22.092934 containerd[1577]: 2025-10-29 00:41:21.992 [INFO][3817] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:41:22.092934 containerd[1577]: 2025-10-29 00:41:21.993 [INFO][3817] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-61970e6314' Oct 29 00:41:22.092934 containerd[1577]: 2025-10-29 00:41:22.007 [INFO][3817] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:22.092934 containerd[1577]: 2025-10-29 00:41:22.016 [INFO][3817] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:22.092934 containerd[1577]: 2025-10-29 00:41:22.023 [INFO][3817] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:22.092934 containerd[1577]: 2025-10-29 00:41:22.026 [INFO][3817] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:22.092934 containerd[1577]: 2025-10-29 00:41:22.030 [INFO][3817] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:22.095252 containerd[1577]: 2025-10-29 00:41:22.030 [INFO][3817] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:22.095252 containerd[1577]: 2025-10-29 00:41:22.032 [INFO][3817] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e Oct 29 00:41:22.095252 containerd[1577]: 2025-10-29 00:41:22.037 [INFO][3817] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:22.095252 containerd[1577]: 2025-10-29 00:41:22.044 [INFO][3817] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.65/26] block=192.168.56.64/26 handle="k8s-pod-network.3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:22.095252 containerd[1577]: 2025-10-29 00:41:22.044 [INFO][3817] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.65/26] handle="k8s-pod-network.3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:22.095252 containerd[1577]: 2025-10-29 00:41:22.044 
[INFO][3817] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:41:22.095252 containerd[1577]: 2025-10-29 00:41:22.044 [INFO][3817] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.65/26] IPv6=[] ContainerID="3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" HandleID="k8s-pod-network.3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" Workload="ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0" Oct 29 00:41:22.095899 containerd[1577]: 2025-10-29 00:41:22.049 [INFO][3805] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" Namespace="calico-system" Pod="whisker-7d8c4588fb-cgxbb" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0", GenerateName:"whisker-7d8c4588fb-", Namespace:"calico-system", SelfLink:"", UID:"fbcce175-ed1f-4ce1-b176-b5da4c1b969c", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d8c4588fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"", Pod:"whisker-7d8c4588fb-cgxbb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.56.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali08a92041301", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:22.095899 containerd[1577]: 2025-10-29 00:41:22.049 [INFO][3805] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.65/32] ContainerID="3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" Namespace="calico-system" Pod="whisker-7d8c4588fb-cgxbb" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0" Oct 29 00:41:22.096443 containerd[1577]: 2025-10-29 00:41:22.049 [INFO][3805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08a92041301 ContainerID="3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" Namespace="calico-system" Pod="whisker-7d8c4588fb-cgxbb" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0" Oct 29 00:41:22.096443 containerd[1577]: 2025-10-29 00:41:22.067 [INFO][3805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" Namespace="calico-system" Pod="whisker-7d8c4588fb-cgxbb" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0" Oct 29 00:41:22.096893 containerd[1577]: 2025-10-29 00:41:22.067 [INFO][3805] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" Namespace="calico-system" Pod="whisker-7d8c4588fb-cgxbb" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0", GenerateName:"whisker-7d8c4588fb-", Namespace:"calico-system", SelfLink:"", UID:"fbcce175-ed1f-4ce1-b176-b5da4c1b969c", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d8c4588fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e", Pod:"whisker-7d8c4588fb-cgxbb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.56.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali08a92041301", MAC:"ce:de:f6:83:30:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:22.097457 containerd[1577]: 2025-10-29 00:41:22.087 [INFO][3805] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" Namespace="calico-system" Pod="whisker-7d8c4588fb-cgxbb" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-whisker--7d8c4588fb--cgxbb-eth0" Oct 29 00:41:22.178227 containerd[1577]: time="2025-10-29T00:41:22.178181171Z" level=info msg="connecting to shim 3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e" address="unix:///run/containerd/s/d7617f3c6cbced65386e32e6c9eeee7c049b2a15ac682b79e0a0babd3f5c6b24" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:22.222695 systemd[1]: Started cri-containerd-3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e.scope - libcontainer container 3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e. 
Oct 29 00:41:22.355473 containerd[1577]: time="2025-10-29T00:41:22.355320891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d8c4588fb-cgxbb,Uid:fbcce175-ed1f-4ce1-b176-b5da4c1b969c,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e16456e2c5a103b8dc6e7f159e0f79b9e80cfeb89cb65a8293191badb526b6e\"" Oct 29 00:41:22.364382 containerd[1577]: time="2025-10-29T00:41:22.363777476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:41:22.381508 kubelet[2754]: I1029 00:41:22.380569 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:41:22.383154 kubelet[2754]: E1029 00:41:22.382879 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:22.462975 containerd[1577]: time="2025-10-29T00:41:22.462920854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d\" id:\"76125c8e28cbe02ae3cd56eb487796ff0c7be279d9ae14f866f913c7b6356a06\" pid:3884 exit_status:1 exited_at:{seconds:1761698482 nanos:446258954}" Oct 29 00:41:22.676461 containerd[1577]: time="2025-10-29T00:41:22.676176444Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:22.677356 containerd[1577]: time="2025-10-29T00:41:22.677279839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:41:22.677642 containerd[1577]: time="2025-10-29T00:41:22.677510283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:41:22.677711 kubelet[2754]: E1029 00:41:22.677670 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:41:22.677823 kubelet[2754]: E1029 00:41:22.677730 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:41:22.677876 kubelet[2754]: E1029 00:41:22.677831 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7d8c4588fb-cgxbb_calico-system(fbcce175-ed1f-4ce1-b176-b5da4c1b969c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:22.680937 containerd[1577]: time="2025-10-29T00:41:22.680546341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 00:41:22.933956 containerd[1577]: time="2025-10-29T00:41:22.933798088Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d\" id:\"1475a4a490d79f5ddec67d1dbf5e6c91c5c946c814a2bf181aa0ed68de7d1cd2\" pid:3972 exit_status:1 exited_at:{seconds:1761698482 nanos:933434854}" Oct 29 00:41:22.981369 containerd[1577]: time="2025-10-29T00:41:22.981194104Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:22.982506 containerd[1577]: time="2025-10-29T00:41:22.982452141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:41:22.982747 containerd[1577]: time="2025-10-29T00:41:22.982699330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:41:22.982950 kubelet[2754]: E1029 00:41:22.982897 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:41:22.983033 kubelet[2754]: E1029 00:41:22.982954 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:41:22.983091 kubelet[2754]: E1029 00:41:22.983032 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7d8c4588fb-cgxbb_calico-system(fbcce175-ed1f-4ce1-b176-b5da4c1b969c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:22.983123 kubelet[2754]: E1029 00:41:22.983074 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d8c4588fb-cgxbb" podUID="fbcce175-ed1f-4ce1-b176-b5da4c1b969c" Oct 29 00:41:23.119182 kubelet[2754]: I1029 00:41:23.119117 2754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d968843-0f63-4a87-a681-17c760a75ce2" path="/var/lib/kubelet/pods/2d968843-0f63-4a87-a681-17c760a75ce2/volumes" Oct 29 00:41:23.264633 systemd-networkd[1487]: 
cali08a92041301: Gained IPv6LL Oct 29 00:41:23.384190 kubelet[2754]: E1029 00:41:23.383786 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:23.385266 kubelet[2754]: E1029 00:41:23.385102 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d8c4588fb-cgxbb" podUID="fbcce175-ed1f-4ce1-b176-b5da4c1b969c" Oct 29 00:41:23.506776 containerd[1577]: time="2025-10-29T00:41:23.506542322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d\" id:\"aa20ad6b3f2bc2fccbfb5bae5cb804cc11457a20152a8485f36d3a45616656bc\" pid:4033 exit_status:1 exited_at:{seconds:1761698483 nanos:506135523}" Oct 29 00:41:26.120801 containerd[1577]: time="2025-10-29T00:41:26.120740183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bdqhh,Uid:1db4eddb-85a3-4ff2-842c-fb7c92440b55,Namespace:calico-system,Attempt:0,}" Oct 29 00:41:26.122595 kubelet[2754]: E1029 00:41:26.122482 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:26.130043 containerd[1577]: time="2025-10-29T00:41:26.126233188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zrzhm,Uid:2087d0f1-acdc-4652-a29c-35bb7514dcab,Namespace:kube-system,Attempt:0,}" Oct 29 00:41:26.133122 containerd[1577]: time="2025-10-29T00:41:26.131498350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f59fd667-w8gqk,Uid:8c11e02f-6f5f-41e7-8866-65b058b61720,Namespace:calico-system,Attempt:0,}" Oct 29 00:41:26.459553 systemd-networkd[1487]: cali2b793dc2a14: Link UP Oct 29 00:41:26.461512 systemd-networkd[1487]: cali2b793dc2a14: Gained carrier Oct 29 00:41:26.483561 containerd[1577]: 2025-10-29 00:41:26.272 [INFO][4109] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:41:26.483561 containerd[1577]: 2025-10-29 00:41:26.306 [INFO][4109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0 coredns-66bc5c9577- kube-system 2087d0f1-acdc-4652-a29c-35bb7514dcab 837 0 2025-10-29 00:40:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-n-61970e6314 
coredns-66bc5c9577-zrzhm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2b793dc2a14 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" Namespace="kube-system" Pod="coredns-66bc5c9577-zrzhm" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-" Oct 29 00:41:26.483561 containerd[1577]: 2025-10-29 00:41:26.306 [INFO][4109] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" Namespace="kube-system" Pod="coredns-66bc5c9577-zrzhm" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0" Oct 29 00:41:26.483561 containerd[1577]: 2025-10-29 00:41:26.371 [INFO][4150] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" HandleID="k8s-pod-network.d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" Workload="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0" Oct 29 00:41:26.483845 containerd[1577]: 2025-10-29 00:41:26.372 [INFO][4150] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" HandleID="k8s-pod-network.d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" Workload="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5220), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.0-n-61970e6314", "pod":"coredns-66bc5c9577-zrzhm", "timestamp":"2025-10-29 00:41:26.371795458 +0000 UTC"}, Hostname:"ci-4487.0.0-n-61970e6314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:41:26.483845 containerd[1577]: 2025-10-29 00:41:26.372 [INFO][4150] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:41:26.483845 containerd[1577]: 2025-10-29 00:41:26.372 [INFO][4150] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:41:26.483845 containerd[1577]: 2025-10-29 00:41:26.372 [INFO][4150] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-61970e6314' Oct 29 00:41:26.483845 containerd[1577]: 2025-10-29 00:41:26.385 [INFO][4150] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.483845 containerd[1577]: 2025-10-29 00:41:26.403 [INFO][4150] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.483845 containerd[1577]: 2025-10-29 00:41:26.418 [INFO][4150] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.483845 containerd[1577]: 2025-10-29 00:41:26.423 [INFO][4150] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.483845 containerd[1577]: 2025-10-29 00:41:26.427 [INFO][4150] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.484100 containerd[1577]: 2025-10-29 00:41:26.427 [INFO][4150] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.484100 containerd[1577]: 2025-10-29 00:41:26.430 [INFO][4150] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6 Oct 29 00:41:26.484100 containerd[1577]: 2025-10-29 00:41:26.440 [INFO][4150] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.484100 containerd[1577]: 2025-10-29 00:41:26.447 [INFO][4150] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.66/26] block=192.168.56.64/26 handle="k8s-pod-network.d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.484100 containerd[1577]: 2025-10-29 00:41:26.447 [INFO][4150] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.66/26] handle="k8s-pod-network.d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.484100 containerd[1577]: 2025-10-29 00:41:26.447 [INFO][4150] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:41:26.484100 containerd[1577]: 2025-10-29 00:41:26.447 [INFO][4150] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.66/26] IPv6=[] ContainerID="d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" HandleID="k8s-pod-network.d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" Workload="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0" Oct 29 00:41:26.484268 containerd[1577]: 2025-10-29 00:41:26.454 [INFO][4109] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" Namespace="kube-system" Pod="coredns-66bc5c9577-zrzhm" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2087d0f1-acdc-4652-a29c-35bb7514dcab", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"", Pod:"coredns-66bc5c9577-zrzhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b793dc2a14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:26.484268 containerd[1577]: 2025-10-29 00:41:26.454 [INFO][4109] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.66/32] ContainerID="d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" Namespace="kube-system" Pod="coredns-66bc5c9577-zrzhm" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0" Oct 29 00:41:26.484268 containerd[1577]: 2025-10-29 00:41:26.455 [INFO][4109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b793dc2a14 ContainerID="d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" Namespace="kube-system" Pod="coredns-66bc5c9577-zrzhm" 
WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0" Oct 29 00:41:26.484268 containerd[1577]: 2025-10-29 00:41:26.459 [INFO][4109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" Namespace="kube-system" Pod="coredns-66bc5c9577-zrzhm" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0" Oct 29 00:41:26.484268 containerd[1577]: 2025-10-29 00:41:26.460 [INFO][4109] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" Namespace="kube-system" Pod="coredns-66bc5c9577-zrzhm" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2087d0f1-acdc-4652-a29c-35bb7514dcab", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6", Pod:"coredns-66bc5c9577-zrzhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b793dc2a14", MAC:"32:1d:7b:18:87:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:26.484873 containerd[1577]: 2025-10-29 00:41:26.476 [INFO][4109] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" Namespace="kube-system" Pod="coredns-66bc5c9577-zrzhm" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--zrzhm-eth0" Oct 29 00:41:26.545126 containerd[1577]: time="2025-10-29T00:41:26.544474962Z" level=info msg="connecting to shim d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6" 
address="unix:///run/containerd/s/40b0353ae0ef2bd285891c0e5ecc1d47f85d97ff6394b60ef9149f045118a952" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:26.584550 systemd[1]: Started cri-containerd-d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6.scope - libcontainer container d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6. Oct 29 00:41:26.587067 systemd-networkd[1487]: cali3be66c6acc2: Link UP Oct 29 00:41:26.590902 systemd-networkd[1487]: cali3be66c6acc2: Gained carrier Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.266 [INFO][4113] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.297 [INFO][4113] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0 calico-kube-controllers-8f59fd667- calico-system 8c11e02f-6f5f-41e7-8866-65b058b61720 848 0 2025-10-29 00:41:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8f59fd667 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487.0.0-n-61970e6314 calico-kube-controllers-8f59fd667-w8gqk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3be66c6acc2 [] [] }} ContainerID="a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" Namespace="calico-system" Pod="calico-kube-controllers-8f59fd667-w8gqk" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.297 [INFO][4113] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" Namespace="calico-system" Pod="calico-kube-controllers-8f59fd667-w8gqk" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.406 [INFO][4143] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" HandleID="k8s-pod-network.a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" Workload="ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.406 [INFO][4143] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" HandleID="k8s-pod-network.a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" Workload="ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028f7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-61970e6314", "pod":"calico-kube-controllers-8f59fd667-w8gqk", "timestamp":"2025-10-29 00:41:26.406686584 +0000 UTC"}, Hostname:"ci-4487.0.0-n-61970e6314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.406 [INFO][4143] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.448 [INFO][4143] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.448 [INFO][4143] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-61970e6314' Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.489 [INFO][4143] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.503 [INFO][4143] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.515 [INFO][4143] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.519 [INFO][4143] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.523 [INFO][4143] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.523 [INFO][4143] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.527 [INFO][4143] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33 Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.537 [INFO][4143] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.553 [INFO][4143] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.67/26] block=192.168.56.64/26 handle="k8s-pod-network.a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.556 [INFO][4143] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.67/26] handle="k8s-pod-network.a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.560 [INFO][4143] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:41:26.627575 containerd[1577]: 2025-10-29 00:41:26.562 [INFO][4143] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.67/26] IPv6=[] ContainerID="a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" HandleID="k8s-pod-network.a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" Workload="ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0" Oct 29 00:41:26.630258 containerd[1577]: 2025-10-29 00:41:26.579 [INFO][4113] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" Namespace="calico-system" Pod="calico-kube-controllers-8f59fd667-w8gqk" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0", GenerateName:"calico-kube-controllers-8f59fd667-", Namespace:"calico-system", SelfLink:"", UID:"8c11e02f-6f5f-41e7-8866-65b058b61720", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f59fd667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"", Pod:"calico-kube-controllers-8f59fd667-w8gqk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3be66c6acc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:26.630258 containerd[1577]: 2025-10-29 00:41:26.580 [INFO][4113] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.67/32] ContainerID="a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" Namespace="calico-system" Pod="calico-kube-controllers-8f59fd667-w8gqk" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0" Oct 29 00:41:26.630258 containerd[1577]: 2025-10-29 00:41:26.580 [INFO][4113] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3be66c6acc2 ContainerID="a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" Namespace="calico-system" Pod="calico-kube-controllers-8f59fd667-w8gqk" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0" Oct 29 00:41:26.630258 containerd[1577]: 2025-10-29 00:41:26.590 [INFO][4113] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" Namespace="calico-system" Pod="calico-kube-controllers-8f59fd667-w8gqk" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0" Oct 29 
00:41:26.630258 containerd[1577]: 2025-10-29 00:41:26.592 [INFO][4113] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" Namespace="calico-system" Pod="calico-kube-controllers-8f59fd667-w8gqk" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0", GenerateName:"calico-kube-controllers-8f59fd667-", Namespace:"calico-system", SelfLink:"", UID:"8c11e02f-6f5f-41e7-8866-65b058b61720", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f59fd667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33", Pod:"calico-kube-controllers-8f59fd667-w8gqk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3be66c6acc2", MAC:"6a:9e:25:f1:83:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:26.630258 containerd[1577]: 2025-10-29 00:41:26.620 [INFO][4113] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" Namespace="calico-system" Pod="calico-kube-controllers-8f59fd667-w8gqk" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--kube--controllers--8f59fd667--w8gqk-eth0" Oct 29 00:41:26.672325 systemd-networkd[1487]: calibc90d009ab0: Link UP Oct 29 00:41:26.674493 systemd-networkd[1487]: calibc90d009ab0: Gained carrier Oct 29 00:41:26.695154 containerd[1577]: time="2025-10-29T00:41:26.695104774Z" level=info msg="connecting to shim a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33" address="unix:///run/containerd/s/bcfc5c9dcb5e8748390289659f771eeed714572f8565db4b046552732de6f554" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.279 [INFO][4106] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.307 [INFO][4106] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0 goldmane-7c778bb748- calico-system 1db4eddb-85a3-4ff2-842c-fb7c92440b55 846 0 2025-10-29 00:41:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487.0.0-n-61970e6314 goldmane-7c778bb748-bdqhh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibc90d009ab0 [] [] }} ContainerID="3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" Namespace="calico-system" Pod="goldmane-7c778bb748-bdqhh" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.308 [INFO][4106] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" Namespace="calico-system" Pod="goldmane-7c778bb748-bdqhh" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.411 [INFO][4149] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" HandleID="k8s-pod-network.3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" Workload="ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.411 [INFO][4149] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" HandleID="k8s-pod-network.3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" Workload="ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032da50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-61970e6314", "pod":"goldmane-7c778bb748-bdqhh", "timestamp":"2025-10-29 00:41:26.411474282 +0000 UTC"}, Hostname:"ci-4487.0.0-n-61970e6314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.411 [INFO][4149] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.561 [INFO][4149] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.562 [INFO][4149] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-61970e6314' Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.597 [INFO][4149] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.610 [INFO][4149] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.628 [INFO][4149] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.634 [INFO][4149] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.637 [INFO][4149] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.637 [INFO][4149] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.640 [INFO][4149] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3 Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.648 [INFO][4149] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.658 [INFO][4149] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.68/26] block=192.168.56.64/26 handle="k8s-pod-network.3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.658 [INFO][4149] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.68/26] handle="k8s-pod-network.3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.658 [INFO][4149] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:41:26.713886 containerd[1577]: 2025-10-29 00:41:26.658 [INFO][4149] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.68/26] IPv6=[] ContainerID="3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" HandleID="k8s-pod-network.3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" Workload="ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0" Oct 29 00:41:26.714875 containerd[1577]: 2025-10-29 00:41:26.667 [INFO][4106] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" Namespace="calico-system" Pod="goldmane-7c778bb748-bdqhh" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1db4eddb-85a3-4ff2-842c-fb7c92440b55", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"", Pod:"goldmane-7c778bb748-bdqhh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibc90d009ab0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:26.714875 containerd[1577]: 2025-10-29 00:41:26.668 [INFO][4106] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.68/32] ContainerID="3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" Namespace="calico-system" Pod="goldmane-7c778bb748-bdqhh" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0" Oct 29 00:41:26.714875 containerd[1577]: 2025-10-29 00:41:26.668 [INFO][4106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc90d009ab0 ContainerID="3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" Namespace="calico-system" Pod="goldmane-7c778bb748-bdqhh" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0" Oct 29 00:41:26.714875 containerd[1577]: 2025-10-29 00:41:26.676 [INFO][4106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" Namespace="calico-system" Pod="goldmane-7c778bb748-bdqhh" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0" Oct 29 00:41:26.714875 containerd[1577]: 2025-10-29 00:41:26.677 [INFO][4106] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" 
Namespace="calico-system" Pod="goldmane-7c778bb748-bdqhh" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1db4eddb-85a3-4ff2-842c-fb7c92440b55", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3", Pod:"goldmane-7c778bb748-bdqhh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibc90d009ab0", MAC:"0e:91:bf:be:83:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:26.714875 containerd[1577]: 2025-10-29 00:41:26.704 [INFO][4106] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" Namespace="calico-system" Pod="goldmane-7c778bb748-bdqhh" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-goldmane--7c778bb748--bdqhh-eth0" Oct 29 00:41:26.726730 containerd[1577]: time="2025-10-29T00:41:26.726579486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zrzhm,Uid:2087d0f1-acdc-4652-a29c-35bb7514dcab,Namespace:kube-system,Attempt:0,} returns sandbox id \"d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6\"" Oct 29 00:41:26.728029 kubelet[2754]: E1029 00:41:26.727842 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:26.745125 containerd[1577]: time="2025-10-29T00:41:26.744798211Z" level=info msg="CreateContainer within sandbox \"d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:41:26.754821 systemd[1]: Started cri-containerd-a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33.scope - libcontainer container a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33. 
Oct 29 00:41:26.763743 containerd[1577]: time="2025-10-29T00:41:26.763678057Z" level=info msg="connecting to shim 3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3" address="unix:///run/containerd/s/9102877cea12d2de4aaaba7d09bde8405af01b83ee99cb8028a9781d4d6cf26c" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:26.768676 containerd[1577]: time="2025-10-29T00:41:26.768627924Z" level=info msg="Container 32532675772ba164abc61d56f06e59208507a1e8d69724e9455b93ae71c462b6: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:26.788418 containerd[1577]: time="2025-10-29T00:41:26.787734061Z" level=info msg="CreateContainer within sandbox \"d00550c0d86e24c288c3d0734916a650160461dadedb0d6dc9f78e17a536a1e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32532675772ba164abc61d56f06e59208507a1e8d69724e9455b93ae71c462b6\"" Oct 29 00:41:26.788665 containerd[1577]: time="2025-10-29T00:41:26.788601073Z" level=info msg="StartContainer for \"32532675772ba164abc61d56f06e59208507a1e8d69724e9455b93ae71c462b6\"" Oct 29 00:41:26.805814 containerd[1577]: time="2025-10-29T00:41:26.805722378Z" level=info msg="connecting to shim 32532675772ba164abc61d56f06e59208507a1e8d69724e9455b93ae71c462b6" address="unix:///run/containerd/s/40b0353ae0ef2bd285891c0e5ecc1d47f85d97ff6394b60ef9149f045118a952" protocol=ttrpc version=3 Oct 29 00:41:26.816608 systemd[1]: Started cri-containerd-3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3.scope - libcontainer container 3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3. Oct 29 00:41:26.832638 systemd[1]: Started cri-containerd-32532675772ba164abc61d56f06e59208507a1e8d69724e9455b93ae71c462b6.scope - libcontainer container 32532675772ba164abc61d56f06e59208507a1e8d69724e9455b93ae71c462b6. 
Oct 29 00:41:26.884255 containerd[1577]: time="2025-10-29T00:41:26.884213129Z" level=info msg="StartContainer for \"32532675772ba164abc61d56f06e59208507a1e8d69724e9455b93ae71c462b6\" returns successfully" Oct 29 00:41:26.893090 containerd[1577]: time="2025-10-29T00:41:26.893051610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f59fd667-w8gqk,Uid:8c11e02f-6f5f-41e7-8866-65b058b61720,Namespace:calico-system,Attempt:0,} returns sandbox id \"a847a9fe83b33e341e72617dba10d48e76fe8d93784cee320c3f9bf962360d33\"" Oct 29 00:41:26.896215 containerd[1577]: time="2025-10-29T00:41:26.896180358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:41:26.953360 containerd[1577]: time="2025-10-29T00:41:26.953142826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bdqhh,Uid:1db4eddb-85a3-4ff2-842c-fb7c92440b55,Namespace:calico-system,Attempt:0,} returns sandbox id \"3dbd2aeae1f4cc9248eed36a68de14495f49555ff4001415888e18e05d21a0a3\"" Oct 29 00:41:27.122779 containerd[1577]: time="2025-10-29T00:41:27.122724645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64df77bb46-p5v5r,Uid:973f8417-d690-432c-be64-a1fb3fd7b7ed,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:41:27.144112 containerd[1577]: time="2025-10-29T00:41:27.144058920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64df77bb46-w7m6v,Uid:b74f2d9f-7641-4b72-a20e-ab5d1471a2c4,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:41:27.225429 containerd[1577]: time="2025-10-29T00:41:27.225252595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:27.226799 containerd[1577]: time="2025-10-29T00:41:27.226645244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:41:27.227063 containerd[1577]: time="2025-10-29T00:41:27.226841029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:41:27.227743 kubelet[2754]: E1029 00:41:27.227253 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:41:27.227743 kubelet[2754]: E1029 00:41:27.227306 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:41:27.229362 kubelet[2754]: E1029 00:41:27.228360 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8f59fd667-w8gqk_calico-system(8c11e02f-6f5f-41e7-8866-65b058b61720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:27.229490 containerd[1577]: time="2025-10-29T00:41:27.228738755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:41:27.230031 kubelet[2754]: E1029 00:41:27.229871 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" podUID="8c11e02f-6f5f-41e7-8866-65b058b61720" Oct 29 00:41:27.355313 systemd-networkd[1487]: cali56e39758836: Link UP Oct 29 00:41:27.355952 systemd-networkd[1487]: cali56e39758836: Gained carrier Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.184 [INFO][4365] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.209 [INFO][4365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0 calico-apiserver-64df77bb46- calico-apiserver 973f8417-d690-432c-be64-a1fb3fd7b7ed 847 0 2025-10-29 00:40:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64df77bb46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-n-61970e6314 calico-apiserver-64df77bb46-p5v5r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56e39758836 [] [] }} ContainerID="ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-p5v5r" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.210 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-p5v5r" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.273 [INFO][4388] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" HandleID="k8s-pod-network.ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" Workload="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.274 [INFO][4388] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" HandleID="k8s-pod-network.ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" Workload="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f950), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-n-61970e6314", "pod":"calico-apiserver-64df77bb46-p5v5r", "timestamp":"2025-10-29 00:41:27.273609425 +0000 UTC"}, Hostname:"ci-4487.0.0-n-61970e6314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.274 [INFO][4388] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.275 [INFO][4388] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.275 [INFO][4388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-61970e6314' Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.292 [INFO][4388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.305 [INFO][4388] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.311 [INFO][4388] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.314 [INFO][4388] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.318 [INFO][4388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.318 [INFO][4388] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.323 [INFO][4388] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4 Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.330 [INFO][4388] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.339 [INFO][4388] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.69/26] block=192.168.56.64/26 handle="k8s-pod-network.ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.339 [INFO][4388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.69/26] handle="k8s-pod-network.ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.340 [INFO][4388] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:41:27.385802 containerd[1577]: 2025-10-29 00:41:27.340 [INFO][4388] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.69/26] IPv6=[] ContainerID="ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" HandleID="k8s-pod-network.ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" Workload="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0" Oct 29 00:41:27.387332 containerd[1577]: 2025-10-29 00:41:27.346 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-p5v5r" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0", GenerateName:"calico-apiserver-64df77bb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"973f8417-d690-432c-be64-a1fb3fd7b7ed", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64df77bb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"", Pod:"calico-apiserver-64df77bb46-p5v5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56e39758836", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:27.387332 containerd[1577]: 2025-10-29 00:41:27.347 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.69/32] ContainerID="ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-p5v5r" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0" Oct 29 00:41:27.387332 containerd[1577]: 2025-10-29 00:41:27.347 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56e39758836 ContainerID="ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-p5v5r" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0" Oct 29 00:41:27.387332 containerd[1577]: 2025-10-29 00:41:27.356 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-p5v5r" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0" Oct 29 00:41:27.387332 containerd[1577]: 2025-10-29 00:41:27.358 
[INFO][4365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-p5v5r" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0", GenerateName:"calico-apiserver-64df77bb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"973f8417-d690-432c-be64-a1fb3fd7b7ed", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64df77bb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4", Pod:"calico-apiserver-64df77bb46-p5v5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56e39758836", MAC:"4e:b9:21:80:99:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:27.387332 containerd[1577]: 2025-10-29 00:41:27.380 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-p5v5r" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--p5v5r-eth0" Oct 29 00:41:27.441819 containerd[1577]: time="2025-10-29T00:41:27.441763147Z" level=info msg="connecting to shim ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4" address="unix:///run/containerd/s/965eb53d20bee9fbb0d013555739ab60fca752007cf3d4f1c9ffa2372bcd8ae4" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:27.466362 kubelet[2754]: E1029 00:41:27.466292 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" podUID="8c11e02f-6f5f-41e7-8866-65b058b61720" Oct 29 00:41:27.481707 systemd-networkd[1487]: cali095230774e8: Link UP Oct 29 00:41:27.487627 systemd-networkd[1487]: cali095230774e8: Gained carrier Oct 29 00:41:27.526368 kubelet[2754]: E1029 00:41:27.525678 2754 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.265 [INFO][4378] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.285 [INFO][4378] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0 calico-apiserver-64df77bb46- calico-apiserver b74f2d9f-7641-4b72-a20e-ab5d1471a2c4 844 0 2025-10-29 00:40:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64df77bb46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-n-61970e6314 calico-apiserver-64df77bb46-w7m6v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali095230774e8 [] [] }} ContainerID="8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-w7m6v" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.285 [INFO][4378] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-w7m6v" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.345 [INFO][4398] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" HandleID="k8s-pod-network.8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" Workload="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.345 [INFO][4398] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" HandleID="k8s-pod-network.8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" Workload="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-n-61970e6314", "pod":"calico-apiserver-64df77bb46-w7m6v", "timestamp":"2025-10-29 00:41:27.345006853 +0000 UTC"}, Hostname:"ci-4487.0.0-n-61970e6314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.345 [INFO][4398] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.345 [INFO][4398] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.345 [INFO][4398] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-61970e6314' Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.392 [INFO][4398] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.407 [INFO][4398] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.413 [INFO][4398] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.416 [INFO][4398] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.421 [INFO][4398] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.421 [INFO][4398] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.428 [INFO][4398] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.433 [INFO][4398] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.446 [INFO][4398] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.70/26] block=192.168.56.64/26 handle="k8s-pod-network.8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.446 [INFO][4398] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.70/26] handle="k8s-pod-network.8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.446 [INFO][4398] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:41:27.555515 containerd[1577]: 2025-10-29 00:41:27.446 [INFO][4398] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.70/26] IPv6=[] ContainerID="8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" HandleID="k8s-pod-network.8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" Workload="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0" Oct 29 00:41:27.556560 containerd[1577]: 2025-10-29 00:41:27.466 [INFO][4378] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-w7m6v" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0", GenerateName:"calico-apiserver-64df77bb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"b74f2d9f-7641-4b72-a20e-ab5d1471a2c4", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64df77bb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"", Pod:"calico-apiserver-64df77bb46-w7m6v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali095230774e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:27.556560 containerd[1577]: 2025-10-29 00:41:27.468 [INFO][4378] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.70/32] ContainerID="8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-w7m6v" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0" Oct 29 00:41:27.556560 containerd[1577]: 2025-10-29 00:41:27.468 [INFO][4378] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali095230774e8 ContainerID="8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-w7m6v" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0" Oct 29 00:41:27.556560 containerd[1577]: 2025-10-29 00:41:27.486 [INFO][4378] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-w7m6v" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0" Oct 29 00:41:27.556560 containerd[1577]: 2025-10-29 00:41:27.491 
[INFO][4378] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-w7m6v" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0", GenerateName:"calico-apiserver-64df77bb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"b74f2d9f-7641-4b72-a20e-ab5d1471a2c4", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64df77bb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e", Pod:"calico-apiserver-64df77bb46-w7m6v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali095230774e8", MAC:"9a:60:07:1b:f2:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:27.556560 containerd[1577]: 2025-10-29 00:41:27.519 [INFO][4378] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" Namespace="calico-apiserver" Pod="calico-apiserver-64df77bb46-w7m6v" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-calico--apiserver--64df77bb46--w7m6v-eth0" Oct 29 00:41:27.583128 kubelet[2754]: I1029 00:41:27.582764 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zrzhm" podStartSLOduration=39.5826762 podStartE2EDuration="39.5826762s" podCreationTimestamp="2025-10-29 00:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:41:27.552879452 +0000 UTC m=+46.614502222" watchObservedRunningTime="2025-10-29 00:41:27.5826762 +0000 UTC m=+46.644298972" Oct 29 00:41:27.594741 systemd[1]: Started cri-containerd-ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4.scope - libcontainer container ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4. 
Oct 29 00:41:27.615693 containerd[1577]: time="2025-10-29T00:41:27.615622031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:27.616693 systemd-networkd[1487]: cali3be66c6acc2: Gained IPv6LL Oct 29 00:41:27.617011 systemd-networkd[1487]: cali2b793dc2a14: Gained IPv6LL Oct 29 00:41:27.619176 containerd[1577]: time="2025-10-29T00:41:27.619090138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:41:27.619176 containerd[1577]: time="2025-10-29T00:41:27.619128761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:41:27.619536 kubelet[2754]: E1029 00:41:27.619500 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:41:27.619725 kubelet[2754]: E1029 00:41:27.619685 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:41:27.619952 kubelet[2754]: E1029 00:41:27.619933 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bdqhh_calico-system(1db4eddb-85a3-4ff2-842c-fb7c92440b55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:27.620091 kubelet[2754]: E1029 00:41:27.620000 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bdqhh" podUID="1db4eddb-85a3-4ff2-842c-fb7c92440b55" Oct 29 00:41:27.628948 containerd[1577]: time="2025-10-29T00:41:27.628902676Z" level=info msg="connecting to shim 8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e" address="unix:///run/containerd/s/dc4e0f550ec046468ffc047a58d1f6eddeda547cf106da776d6290cd03f73b4b" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:27.683825 systemd[1]: Started cri-containerd-8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e.scope - libcontainer container 8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e. 
Oct 29 00:41:27.741813 containerd[1577]: time="2025-10-29T00:41:27.741772897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64df77bb46-p5v5r,Uid:973f8417-d690-432c-be64-a1fb3fd7b7ed,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ca092bcecf593316b3bb99c41e1f4f9f63b120782f79d1a92272780b808b95f4\"" Oct 29 00:41:27.745162 containerd[1577]: time="2025-10-29T00:41:27.745111069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:41:27.809294 containerd[1577]: time="2025-10-29T00:41:27.809129213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64df77bb46-w7m6v,Uid:b74f2d9f-7641-4b72-a20e-ab5d1471a2c4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8a79a5ab10b3edfe7ca2b8cbe0abefc7997974ed1641e8019181b394501d045e\"" Oct 29 00:41:28.099987 containerd[1577]: time="2025-10-29T00:41:28.099915180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:28.100704 containerd[1577]: time="2025-10-29T00:41:28.100664224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:41:28.100823 containerd[1577]: time="2025-10-29T00:41:28.100675242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:41:28.101018 kubelet[2754]: E1029 00:41:28.100982 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:41:28.101085 kubelet[2754]: E1029 00:41:28.101031 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:41:28.101244 kubelet[2754]: E1029 00:41:28.101208 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64df77bb46-p5v5r_calico-apiserver(973f8417-d690-432c-be64-a1fb3fd7b7ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:28.101316 kubelet[2754]: E1029 00:41:28.101251 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" podUID="973f8417-d690-432c-be64-a1fb3fd7b7ed" Oct 29 00:41:28.103352 containerd[1577]: time="2025-10-29T00:41:28.103289635Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:41:28.385620 systemd-networkd[1487]: calibc90d009ab0: Gained IPv6LL Oct 29 00:41:28.467381 containerd[1577]: time="2025-10-29T00:41:28.467315093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:28.468233 containerd[1577]: time="2025-10-29T00:41:28.468182224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:41:28.468335 containerd[1577]: time="2025-10-29T00:41:28.468270497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:41:28.468777 kubelet[2754]: E1029 00:41:28.468666 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:41:28.468777 kubelet[2754]: E1029 00:41:28.468730 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:41:28.469425 kubelet[2754]: E1029 00:41:28.469262 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64df77bb46-w7m6v_calico-apiserver(b74f2d9f-7641-4b72-a20e-ab5d1471a2c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:28.469425 kubelet[2754]: E1029 00:41:28.469332 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" podUID="b74f2d9f-7641-4b72-a20e-ab5d1471a2c4" Oct 29 00:41:28.529956 kubelet[2754]: E1029 00:41:28.529820 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" podUID="b74f2d9f-7641-4b72-a20e-ab5d1471a2c4" Oct 29 00:41:28.531445 kubelet[2754]: E1029 00:41:28.531365 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:28.532956 kubelet[2754]: E1029 00:41:28.532249 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bdqhh" podUID="1db4eddb-85a3-4ff2-842c-fb7c92440b55" Oct 29 00:41:28.532956 kubelet[2754]: E1029 00:41:28.532330 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" podUID="973f8417-d690-432c-be64-a1fb3fd7b7ed" Oct 29 00:41:28.532956 kubelet[2754]: E1029 00:41:28.532532 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" podUID="8c11e02f-6f5f-41e7-8866-65b058b61720" Oct 29 00:41:28.960814 systemd-networkd[1487]: cali095230774e8: Gained IPv6LL Oct 29 00:41:29.088569 systemd-networkd[1487]: cali56e39758836: Gained IPv6LL Oct 29 00:41:29.108688 kubelet[2754]: I1029 00:41:29.108576 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:41:29.109053 kubelet[2754]: E1029 00:41:29.109030 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:29.118768 kubelet[2754]: E1029 00:41:29.118697 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:29.119773 containerd[1577]: time="2025-10-29T00:41:29.119728845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-276cs,Uid:49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8,Namespace:kube-system,Attempt:0,}" Oct 29 00:41:29.300926 systemd-networkd[1487]: cali484f32f5a28: Link UP Oct 29 00:41:29.302151 systemd-networkd[1487]: cali484f32f5a28: Gained carrier Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.165 [INFO][4549] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.194 [INFO][4549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0 coredns-66bc5c9577- kube-system 49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8 843 0 2025-10-29 00:40:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-n-61970e6314 coredns-66bc5c9577-276cs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali484f32f5a28 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" Namespace="kube-system" Pod="coredns-66bc5c9577-276cs" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.194 [INFO][4549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" Namespace="kube-system" Pod="coredns-66bc5c9577-276cs" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.242 [INFO][4561] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" HandleID="k8s-pod-network.9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" Workload="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.242 [INFO][4561] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" HandleID="k8s-pod-network.9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" Workload="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002acc90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.0-n-61970e6314", "pod":"coredns-66bc5c9577-276cs", "timestamp":"2025-10-29 00:41:29.24243562 +0000 UTC"}, Hostname:"ci-4487.0.0-n-61970e6314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.242 [INFO][4561] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.242 [INFO][4561] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.242 [INFO][4561] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-61970e6314' Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.253 [INFO][4561] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.258 [INFO][4561] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.264 [INFO][4561] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.267 [INFO][4561] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.270 [INFO][4561] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.271 [INFO][4561] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.273 [INFO][4561] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.282 [INFO][4561] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.290 [INFO][4561] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.71/26] block=192.168.56.64/26 handle="k8s-pod-network.9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.290 [INFO][4561] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.71/26] handle="k8s-pod-network.9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.291 [INFO][4561] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:41:29.324622 containerd[1577]: 2025-10-29 00:41:29.291 [INFO][4561] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.71/26] IPv6=[] ContainerID="9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" HandleID="k8s-pod-network.9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" Workload="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0" Oct 29 00:41:29.325618 containerd[1577]: 2025-10-29 00:41:29.296 [INFO][4549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" Namespace="kube-system" Pod="coredns-66bc5c9577-276cs" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"", Pod:"coredns-66bc5c9577-276cs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali484f32f5a28", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:29.325618 containerd[1577]: 2025-10-29 00:41:29.296 [INFO][4549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.71/32] ContainerID="9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" Namespace="kube-system" Pod="coredns-66bc5c9577-276cs" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0" Oct 29 00:41:29.325618 containerd[1577]: 2025-10-29 00:41:29.296 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali484f32f5a28 ContainerID="9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" Namespace="kube-system" Pod="coredns-66bc5c9577-276cs" 
WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0" Oct 29 00:41:29.325618 containerd[1577]: 2025-10-29 00:41:29.301 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" Namespace="kube-system" Pod="coredns-66bc5c9577-276cs" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0" Oct 29 00:41:29.325618 containerd[1577]: 2025-10-29 00:41:29.302 [INFO][4549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" Namespace="kube-system" Pod="coredns-66bc5c9577-276cs" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e", Pod:"coredns-66bc5c9577-276cs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali484f32f5a28", MAC:"ea:62:20:55:3e:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:29.325845 containerd[1577]: 2025-10-29 00:41:29.319 [INFO][4549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" Namespace="kube-system" Pod="coredns-66bc5c9577-276cs" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-coredns--66bc5c9577--276cs-eth0" Oct 29 00:41:29.356371 containerd[1577]: time="2025-10-29T00:41:29.355880188Z" level=info msg="connecting to shim 9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e" 
address="unix:///run/containerd/s/26aa890e8644ce14236b892b6cec54c802c140c94e06c0ef343aaac3d8a8bf1a" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:29.421817 systemd[1]: Started cri-containerd-9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e.scope - libcontainer container 9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e. Oct 29 00:41:29.500378 containerd[1577]: time="2025-10-29T00:41:29.500312088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-276cs,Uid:49e1d0d2-4b83-435e-8d3b-7c6d253bcfc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e\"" Oct 29 00:41:29.503332 kubelet[2754]: E1029 00:41:29.503269 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:29.526121 containerd[1577]: time="2025-10-29T00:41:29.526066785Z" level=info msg="CreateContainer within sandbox \"9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:41:29.540410 containerd[1577]: time="2025-10-29T00:41:29.539706105Z" level=info msg="Container e607288132efd5130733ee6f849fdd1ab515a91e61fbd30a6b91d7731f147a38: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:29.542384 kubelet[2754]: E1029 00:41:29.542308 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:29.543747 kubelet[2754]: E1029 00:41:29.543709 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:29.546633 kubelet[2754]: E1029 00:41:29.545908 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" podUID="b74f2d9f-7641-4b72-a20e-ab5d1471a2c4" Oct 29 00:41:29.546633 kubelet[2754]: E1029 00:41:29.546491 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" podUID="973f8417-d690-432c-be64-a1fb3fd7b7ed" Oct 29 00:41:29.564275 containerd[1577]: time="2025-10-29T00:41:29.564126684Z" level=info msg="CreateContainer within sandbox \"9878779cf74436998f9457f033b44d362320933cb8c461f7234240328acd8e3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e607288132efd5130733ee6f849fdd1ab515a91e61fbd30a6b91d7731f147a38\"" Oct 29 00:41:29.569555 
containerd[1577]: time="2025-10-29T00:41:29.568380923Z" level=info msg="StartContainer for \"e607288132efd5130733ee6f849fdd1ab515a91e61fbd30a6b91d7731f147a38\"" Oct 29 00:41:29.570092 containerd[1577]: time="2025-10-29T00:41:29.569981960Z" level=info msg="connecting to shim e607288132efd5130733ee6f849fdd1ab515a91e61fbd30a6b91d7731f147a38" address="unix:///run/containerd/s/26aa890e8644ce14236b892b6cec54c802c140c94e06c0ef343aaac3d8a8bf1a" protocol=ttrpc version=3 Oct 29 00:41:29.612107 systemd[1]: Started cri-containerd-e607288132efd5130733ee6f849fdd1ab515a91e61fbd30a6b91d7731f147a38.scope - libcontainer container e607288132efd5130733ee6f849fdd1ab515a91e61fbd30a6b91d7731f147a38. Oct 29 00:41:29.684599 containerd[1577]: time="2025-10-29T00:41:29.684545588Z" level=info msg="StartContainer for \"e607288132efd5130733ee6f849fdd1ab515a91e61fbd30a6b91d7731f147a38\" returns successfully" Oct 29 00:41:30.325091 systemd-networkd[1487]: vxlan.calico: Link UP Oct 29 00:41:30.325100 systemd-networkd[1487]: vxlan.calico: Gained carrier Oct 29 00:41:30.498404 systemd-networkd[1487]: cali484f32f5a28: Gained IPv6LL Oct 29 00:41:30.547490 kubelet[2754]: E1029 00:41:30.546858 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:30.586231 kubelet[2754]: I1029 00:41:30.585850 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-276cs" podStartSLOduration=42.585830129 podStartE2EDuration="42.585830129s" podCreationTimestamp="2025-10-29 00:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:41:30.56426112 +0000 UTC m=+49.625883891" watchObservedRunningTime="2025-10-29 00:41:30.585830129 +0000 UTC m=+49.647452900" Oct 29 00:41:31.121282 containerd[1577]: time="2025-10-29T00:41:31.121244759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlqqf,Uid:a7b26047-6837-40e0-9ca7-4fee0f45a405,Namespace:calico-system,Attempt:0,}" Oct 29 00:41:31.309912 systemd-networkd[1487]: cali817ac4dcc44: Link UP Oct 29 00:41:31.312584 systemd-networkd[1487]: cali817ac4dcc44: Gained carrier Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.177 [INFO][4786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0 csi-node-driver- calico-system a7b26047-6837-40e0-9ca7-4fee0f45a405 729 0 2025-10-29 00:41:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487.0.0-n-61970e6314 csi-node-driver-mlqqf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali817ac4dcc44 [] [] }} ContainerID="3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" Namespace="calico-system" Pod="csi-node-driver-mlqqf" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-" Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.177 [INFO][4786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" 
Namespace="calico-system" Pod="csi-node-driver-mlqqf" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0" Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.224 [INFO][4798] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" HandleID="k8s-pod-network.3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" Workload="ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0" Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.225 [INFO][4798] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" HandleID="k8s-pod-network.3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" Workload="ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-61970e6314", "pod":"csi-node-driver-mlqqf", "timestamp":"2025-10-29 00:41:31.224656228 +0000 UTC"}, Hostname:"ci-4487.0.0-n-61970e6314", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.225 [INFO][4798] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.225 [INFO][4798] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.226 [INFO][4798] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-61970e6314' Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.241 [INFO][4798] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.250 [INFO][4798] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.266 [INFO][4798] ipam/ipam.go 511: Trying affinity for 192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.270 [INFO][4798] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.274 [INFO][4798] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.275 [INFO][4798] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.277 [INFO][4798] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.288 [INFO][4798] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" host="ci-4487.0.0-n-61970e6314" Oct 
29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.297 [INFO][4798] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.72/26] block=192.168.56.64/26 handle="k8s-pod-network.3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.297 [INFO][4798] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.72/26] handle="k8s-pod-network.3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" host="ci-4487.0.0-n-61970e6314" Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.297 [INFO][4798] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:41:31.355122 containerd[1577]: 2025-10-29 00:41:31.297 [INFO][4798] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.72/26] IPv6=[] ContainerID="3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" HandleID="k8s-pod-network.3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" Workload="ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0" Oct 29 00:41:31.360306 containerd[1577]: 2025-10-29 00:41:31.304 [INFO][4786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" Namespace="calico-system" Pod="csi-node-driver-mlqqf" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7b26047-6837-40e0-9ca7-4fee0f45a405", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"", Pod:"csi-node-driver-mlqqf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali817ac4dcc44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:31.360306 containerd[1577]: 2025-10-29 00:41:31.304 [INFO][4786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.72/32] ContainerID="3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" Namespace="calico-system" Pod="csi-node-driver-mlqqf" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0" Oct 29 00:41:31.360306 containerd[1577]: 2025-10-29 00:41:31.304 [INFO][4786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali817ac4dcc44 ContainerID="3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" 
Namespace="calico-system" Pod="csi-node-driver-mlqqf" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0" Oct 29 00:41:31.360306 containerd[1577]: 2025-10-29 00:41:31.311 [INFO][4786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" Namespace="calico-system" Pod="csi-node-driver-mlqqf" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0" Oct 29 00:41:31.360306 containerd[1577]: 2025-10-29 00:41:31.311 [INFO][4786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" Namespace="calico-system" Pod="csi-node-driver-mlqqf" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7b26047-6837-40e0-9ca7-4fee0f45a405", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-61970e6314", ContainerID:"3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a", Pod:"csi-node-driver-mlqqf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali817ac4dcc44", MAC:"4a:14:73:fc:15:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:41:31.360306 containerd[1577]: 2025-10-29 00:41:31.343 [INFO][4786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" Namespace="calico-system" Pod="csi-node-driver-mlqqf" WorkloadEndpoint="ci--4487.0.0--n--61970e6314-k8s-csi--node--driver--mlqqf-eth0" Oct 29 00:41:31.414510 containerd[1577]: time="2025-10-29T00:41:31.413623414Z" level=info msg="connecting to shim 3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a" address="unix:///run/containerd/s/254a9385fbcbe88f34ba8046c81dc2df8e85e8db99eafc31828fb2bdef3f8cc7" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:31.472806 systemd[1]: Started cri-containerd-3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a.scope - libcontainer container 3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a. 
Oct 29 00:41:31.551752 kubelet[2754]: E1029 00:41:31.551714 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:31.570566 containerd[1577]: time="2025-10-29T00:41:31.570512999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlqqf,Uid:a7b26047-6837-40e0-9ca7-4fee0f45a405,Namespace:calico-system,Attempt:0,} returns sandbox id \"3034b93c935b1c44fffeb6237fb6a35b921d2b0262a7050b8d576d367a9ceb6a\"" Oct 29 00:41:31.575370 containerd[1577]: time="2025-10-29T00:41:31.574913010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:41:31.840737 systemd-networkd[1487]: vxlan.calico: Gained IPv6LL Oct 29 00:41:31.930857 containerd[1577]: time="2025-10-29T00:41:31.930571043Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:31.931277 containerd[1577]: time="2025-10-29T00:41:31.931245429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:41:31.931378 containerd[1577]: time="2025-10-29T00:41:31.931327062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:41:31.932170 kubelet[2754]: E1029 00:41:31.931568 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:41:31.932170 kubelet[2754]: E1029 00:41:31.931630 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:41:31.932170 kubelet[2754]: E1029 00:41:31.931724 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-mlqqf_calico-system(a7b26047-6837-40e0-9ca7-4fee0f45a405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:31.933168 containerd[1577]: time="2025-10-29T00:41:31.933054772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:41:32.338626 containerd[1577]: time="2025-10-29T00:41:32.338562949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:32.339266 containerd[1577]: time="2025-10-29T00:41:32.339232399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 
00:41:32.339331 containerd[1577]: time="2025-10-29T00:41:32.339318879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:41:32.339790 kubelet[2754]: E1029 00:41:32.339550 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:41:32.339790 kubelet[2754]: E1029 00:41:32.339609 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:41:32.339790 kubelet[2754]: E1029 00:41:32.339698 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-mlqqf_calico-system(a7b26047-6837-40e0-9ca7-4fee0f45a405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:32.339994 kubelet[2754]: E1029 00:41:32.339739 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:41:32.556737 kubelet[2754]: E1029 00:41:32.556694 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:32.558812 kubelet[2754]: E1029 00:41:32.558757 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:41:32.864642 systemd-networkd[1487]: cali817ac4dcc44: Gained IPv6LL Oct 29 00:41:33.561390 kubelet[2754]: E1029 00:41:33.560640 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:41:38.118376 containerd[1577]: time="2025-10-29T00:41:38.118252689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:41:38.440400 containerd[1577]: time="2025-10-29T00:41:38.440112904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:38.441260 containerd[1577]: time="2025-10-29T00:41:38.441119989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:41:38.441260 containerd[1577]: time="2025-10-29T00:41:38.441138105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:41:38.441478 kubelet[2754]: E1029 00:41:38.441430 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:41:38.443007 kubelet[2754]: E1029 00:41:38.441479 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:41:38.443007 kubelet[2754]: E1029 00:41:38.441569 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7d8c4588fb-cgxbb_calico-system(fbcce175-ed1f-4ce1-b176-b5da4c1b969c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:38.443622 containerd[1577]: time="2025-10-29T00:41:38.443572216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 00:41:38.914776 containerd[1577]: time="2025-10-29T00:41:38.914727127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:38.915489 containerd[1577]: time="2025-10-29T00:41:38.915455208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:41:38.916050 kubelet[2754]: E1029 00:41:38.915940 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:41:38.916326 kubelet[2754]: E1029 00:41:38.916185 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:41:38.916620 kubelet[2754]: E1029 00:41:38.916502 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7d8c4588fb-cgxbb_calico-system(fbcce175-ed1f-4ce1-b176-b5da4c1b969c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:38.916809 kubelet[2754]: E1029 00:41:38.916600 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d8c4588fb-cgxbb" podUID="fbcce175-ed1f-4ce1-b176-b5da4c1b969c" Oct 29 00:41:38.922276 containerd[1577]: time="2025-10-29T00:41:38.915534170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:41:40.121439 containerd[1577]: time="2025-10-29T00:41:40.120545824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:41:40.485231 containerd[1577]: time="2025-10-29T00:41:40.484987755Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:40.485961 containerd[1577]: time="2025-10-29T00:41:40.485902765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:41:40.486052 containerd[1577]: time="2025-10-29T00:41:40.486004015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:41:40.486209 kubelet[2754]: E1029 00:41:40.486168 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:41:40.486777 kubelet[2754]: E1029 00:41:40.486220 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:41:40.486777 kubelet[2754]: E1029 00:41:40.486490 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bdqhh_calico-system(1db4eddb-85a3-4ff2-842c-fb7c92440b55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:40.486777 kubelet[2754]: E1029 00:41:40.486637 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bdqhh" podUID="1db4eddb-85a3-4ff2-842c-fb7c92440b55" Oct 29 00:41:42.118129 containerd[1577]: time="2025-10-29T00:41:42.118047063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:41:42.459304 containerd[1577]: time="2025-10-29T00:41:42.458727802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:42.459844 containerd[1577]: time="2025-10-29T00:41:42.459795530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:41:42.459939 containerd[1577]: time="2025-10-29T00:41:42.459918665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:41:42.460621 kubelet[2754]: E1029 00:41:42.460303 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:41:42.460621 kubelet[2754]: E1029 00:41:42.460426 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:41:42.461125 containerd[1577]: time="2025-10-29T00:41:42.460933251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:41:42.461700 kubelet[2754]: E1029 00:41:42.460656 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64df77bb46-w7m6v_calico-apiserver(b74f2d9f-7641-4b72-a20e-ab5d1471a2c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:42.461700 kubelet[2754]: E1029 00:41:42.461309 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" podUID="b74f2d9f-7641-4b72-a20e-ab5d1471a2c4" Oct 29 00:41:42.810715 containerd[1577]: time="2025-10-29T00:41:42.810535109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:42.811745 containerd[1577]: time="2025-10-29T00:41:42.811682279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:41:42.811861 containerd[1577]: time="2025-10-29T00:41:42.811719278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:41:42.812124 kubelet[2754]: E1029 00:41:42.812078 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:41:42.812368 kubelet[2754]: E1029 00:41:42.812143 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:41:42.812368 
kubelet[2754]: E1029 00:41:42.812257 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8f59fd667-w8gqk_calico-system(8c11e02f-6f5f-41e7-8866-65b058b61720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:42.812368 kubelet[2754]: E1029 00:41:42.812304 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" podUID="8c11e02f-6f5f-41e7-8866-65b058b61720" Oct 29 00:41:45.119096 containerd[1577]: time="2025-10-29T00:41:45.118948219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:41:45.452458 containerd[1577]: time="2025-10-29T00:41:45.452218354Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:45.453516 containerd[1577]: time="2025-10-29T00:41:45.453390588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:41:45.453516 containerd[1577]: time="2025-10-29T00:41:45.453445429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:41:45.453770 kubelet[2754]: E1029 00:41:45.453717 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:41:45.454101 kubelet[2754]: E1029 00:41:45.453787 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:41:45.454101 kubelet[2754]: E1029 00:41:45.453879 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64df77bb46-p5v5r_calico-apiserver(973f8417-d690-432c-be64-a1fb3fd7b7ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:45.454101 kubelet[2754]: E1029 00:41:45.453913 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" podUID="973f8417-d690-432c-be64-a1fb3fd7b7ed" Oct 29 00:41:47.118032 containerd[1577]: time="2025-10-29T00:41:47.117921820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:41:47.491706 containerd[1577]: time="2025-10-29T00:41:47.491223713Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:47.492485 containerd[1577]: time="2025-10-29T00:41:47.492431735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:41:47.492663 containerd[1577]: time="2025-10-29T00:41:47.492454971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:41:47.493434 kubelet[2754]: E1029 00:41:47.492648 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:41:47.493434 kubelet[2754]: E1029 00:41:47.492729 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:41:47.493434 kubelet[2754]: E1029 00:41:47.492816 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-mlqqf_calico-system(a7b26047-6837-40e0-9ca7-4fee0f45a405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:47.495044 containerd[1577]: time="2025-10-29T00:41:47.494971548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:41:47.817564 containerd[1577]: time="2025-10-29T00:41:47.817386958Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:41:47.818712 containerd[1577]: time="2025-10-29T00:41:47.818666651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:41:47.818996 containerd[1577]: time="2025-10-29T00:41:47.818776973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:41:47.819122 kubelet[2754]: E1029 00:41:47.818954 2754 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:41:47.819422 kubelet[2754]: E1029 00:41:47.819120 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:41:47.819422 kubelet[2754]: E1029 00:41:47.819240 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-mlqqf_calico-system(a7b26047-6837-40e0-9ca7-4fee0f45a405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:41:47.819422 kubelet[2754]: E1029 00:41:47.819321 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:41:51.125269 kubelet[2754]: E1029 00:41:51.125170 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bdqhh" podUID="1db4eddb-85a3-4ff2-842c-fb7c92440b55" Oct 29 00:41:51.130372 kubelet[2754]: E1029 00:41:51.129551 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d8c4588fb-cgxbb" podUID="fbcce175-ed1f-4ce1-b176-b5da4c1b969c" Oct 29 00:41:52.982539 systemd[1]: Started sshd@7-64.23.202.85:22-139.178.89.65:40556.service - OpenSSH per-connection server daemon (139.178.89.65:40556). Oct 29 00:41:53.115602 sshd[4895]: Accepted publickey for core from 139.178.89.65 port 40556 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:41:53.121997 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:41:53.129459 systemd-logind[1563]: New session 8 of user core. Oct 29 00:41:53.134657 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 29 00:41:53.583368 containerd[1577]: time="2025-10-29T00:41:53.583261475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d\" id:\"4655cc48a298609075331309d533a30f649af1c0aaebb232690ef9cbdcd0d299\" pid:4920 exited_at:{seconds:1761698513 nanos:582823944}" Oct 29 00:41:53.591081 kubelet[2754]: E1029 00:41:53.590051 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:53.622371 sshd[4898]: Connection closed by 139.178.89.65 port 40556 Oct 29 00:41:53.622582 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Oct 29 00:41:53.633439 systemd[1]: sshd@7-64.23.202.85:22-139.178.89.65:40556.service: Deactivated successfully. Oct 29 00:41:53.633507 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. Oct 29 00:41:53.637277 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 00:41:53.641608 systemd-logind[1563]: Removed session 8. 
Oct 29 00:41:56.117752 kubelet[2754]: E1029 00:41:56.117609 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" podUID="8c11e02f-6f5f-41e7-8866-65b058b61720" Oct 29 00:41:57.119762 kubelet[2754]: E1029 00:41:57.119696 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" podUID="b74f2d9f-7641-4b72-a20e-ab5d1471a2c4" Oct 29 00:41:58.116838 kubelet[2754]: E1029 00:41:58.116657 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:41:58.635097 systemd[1]: Started sshd@8-64.23.202.85:22-139.178.89.65:50394.service - OpenSSH per-connection server daemon (139.178.89.65:50394). Oct 29 00:41:58.716063 sshd[4940]: Accepted publickey for core from 139.178.89.65 port 50394 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:41:58.717818 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:41:58.724414 systemd-logind[1563]: New session 9 of user core. Oct 29 00:41:58.729615 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 29 00:41:58.872014 sshd[4947]: Connection closed by 139.178.89.65 port 50394 Oct 29 00:41:58.872695 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Oct 29 00:41:58.878741 systemd[1]: sshd@8-64.23.202.85:22-139.178.89.65:50394.service: Deactivated successfully. Oct 29 00:41:58.881477 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 00:41:58.882517 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit. Oct 29 00:41:58.884540 systemd-logind[1563]: Removed session 9. 
Oct 29 00:42:01.120161 kubelet[2754]: E1029 00:42:01.119913 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" podUID="973f8417-d690-432c-be64-a1fb3fd7b7ed" Oct 29 00:42:01.123521 kubelet[2754]: E1029 00:42:01.122970 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:42:01.125262 kubelet[2754]: E1029 00:42:01.125153 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:42:02.118142 containerd[1577]: time="2025-10-29T00:42:02.117996667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:42:02.489240 containerd[1577]: time="2025-10-29T00:42:02.489093477Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:02.489912 containerd[1577]: time="2025-10-29T00:42:02.489868290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:42:02.490018 containerd[1577]: time="2025-10-29T00:42:02.489969212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:02.490258 kubelet[2754]: E1029 00:42:02.490201 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:02.490685 kubelet[2754]: E1029 00:42:02.490282 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:02.490685 kubelet[2754]: E1029 00:42:02.490434 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bdqhh_calico-system(1db4eddb-85a3-4ff2-842c-fb7c92440b55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:02.490685 kubelet[2754]: E1029 00:42:02.490474 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bdqhh" podUID="1db4eddb-85a3-4ff2-842c-fb7c92440b55" Oct 29 00:42:03.887709 systemd[1]: Started sshd@9-64.23.202.85:22-139.178.89.65:50398.service - OpenSSH per-connection server daemon (139.178.89.65:50398). Oct 29 00:42:03.956071 sshd[4962]: Accepted publickey for core from 139.178.89.65 port 50398 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:03.958108 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:03.963962 systemd-logind[1563]: New session 10 of user core. Oct 29 00:42:03.968586 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 29 00:42:04.100205 sshd[4965]: Connection closed by 139.178.89.65 port 50398 Oct 29 00:42:04.101304 sshd-session[4962]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:04.106046 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit. Oct 29 00:42:04.106579 systemd[1]: sshd@9-64.23.202.85:22-139.178.89.65:50398.service: Deactivated successfully. Oct 29 00:42:04.110623 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 00:42:04.116051 systemd-logind[1563]: Removed session 10. 
Oct 29 00:42:05.118897 kubelet[2754]: E1029 00:42:05.118768 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:42:05.123337 containerd[1577]: time="2025-10-29T00:42:05.123304152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:42:05.455455 containerd[1577]: time="2025-10-29T00:42:05.454707718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:05.456653 containerd[1577]: time="2025-10-29T00:42:05.456426121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:42:05.456912 containerd[1577]: time="2025-10-29T00:42:05.456635489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:42:05.457202 kubelet[2754]: E1029 00:42:05.457151 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:05.457273 kubelet[2754]: E1029 00:42:05.457215 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:05.457324 kubelet[2754]: E1029 00:42:05.457305 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7d8c4588fb-cgxbb_calico-system(fbcce175-ed1f-4ce1-b176-b5da4c1b969c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:05.459495 containerd[1577]: time="2025-10-29T00:42:05.459468502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 00:42:05.825941 containerd[1577]: time="2025-10-29T00:42:05.825699377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:05.827310 containerd[1577]: time="2025-10-29T00:42:05.827192399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:42:05.827310 containerd[1577]: time="2025-10-29T00:42:05.827288627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:05.829743 kubelet[2754]: E1029 00:42:05.829675 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:05.830201 kubelet[2754]: E1029 00:42:05.829900 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:05.830201 kubelet[2754]: E1029 00:42:05.830044 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7d8c4588fb-cgxbb_calico-system(fbcce175-ed1f-4ce1-b176-b5da4c1b969c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:05.830371 kubelet[2754]: E1029 00:42:05.830290 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d8c4588fb-cgxbb" podUID="fbcce175-ed1f-4ce1-b176-b5da4c1b969c" Oct 29 00:42:07.122699 containerd[1577]: time="2025-10-29T00:42:07.122253552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:42:07.474450 containerd[1577]: time="2025-10-29T00:42:07.473983262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:07.476321 containerd[1577]: time="2025-10-29T00:42:07.476253972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:42:07.476511 containerd[1577]: time="2025-10-29T00:42:07.476394712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:07.476950 kubelet[2754]: E1029 00:42:07.476630 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:07.476950 kubelet[2754]: E1029 00:42:07.476770 2754 kuberuntime_image.go:43] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:07.476950 kubelet[2754]: E1029 00:42:07.476876 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8f59fd667-w8gqk_calico-system(8c11e02f-6f5f-41e7-8866-65b058b61720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:07.476950 kubelet[2754]: E1029 00:42:07.476924 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" podUID="8c11e02f-6f5f-41e7-8866-65b058b61720" Oct 29 00:42:08.117373 kubelet[2754]: E1029 00:42:08.117148 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:42:08.119538 containerd[1577]: time="2025-10-29T00:42:08.119419674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:08.455602 containerd[1577]: time="2025-10-29T00:42:08.455298000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:08.456559 containerd[1577]: time="2025-10-29T00:42:08.456315989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:08.456559 containerd[1577]: time="2025-10-29T00:42:08.456396528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:08.456670 kubelet[2754]: E1029 00:42:08.456572 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:08.456670 kubelet[2754]: E1029 00:42:08.456632 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:08.456813 kubelet[2754]: E1029 00:42:08.456751 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
calico-apiserver start failed in pod calico-apiserver-64df77bb46-w7m6v_calico-apiserver(b74f2d9f-7641-4b72-a20e-ab5d1471a2c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:08.456813 kubelet[2754]: E1029 00:42:08.456787 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" podUID="b74f2d9f-7641-4b72-a20e-ab5d1471a2c4" Oct 29 00:42:09.122245 systemd[1]: Started sshd@10-64.23.202.85:22-139.178.89.65:51068.service - OpenSSH per-connection server daemon (139.178.89.65:51068). Oct 29 00:42:09.270688 sshd[4978]: Accepted publickey for core from 139.178.89.65 port 51068 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:09.273973 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:09.280246 systemd-logind[1563]: New session 11 of user core. Oct 29 00:42:09.288660 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 29 00:42:09.536951 sshd[4981]: Connection closed by 139.178.89.65 port 51068 Oct 29 00:42:09.538659 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:09.553795 systemd[1]: sshd@10-64.23.202.85:22-139.178.89.65:51068.service: Deactivated successfully. Oct 29 00:42:09.556389 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 00:42:09.557620 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit. Oct 29 00:42:09.562423 systemd[1]: Started sshd@11-64.23.202.85:22-139.178.89.65:51084.service - OpenSSH per-connection server daemon (139.178.89.65:51084). Oct 29 00:42:09.563530 systemd-logind[1563]: Removed session 11. Oct 29 00:42:09.646114 sshd[4994]: Accepted publickey for core from 139.178.89.65 port 51084 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:09.648056 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:09.657595 systemd-logind[1563]: New session 12 of user core. Oct 29 00:42:09.663691 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 29 00:42:09.884359 sshd[4997]: Connection closed by 139.178.89.65 port 51084 Oct 29 00:42:09.885592 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:09.898523 systemd[1]: sshd@11-64.23.202.85:22-139.178.89.65:51084.service: Deactivated successfully. Oct 29 00:42:09.906759 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 00:42:09.912175 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit. Oct 29 00:42:09.918508 systemd[1]: Started sshd@12-64.23.202.85:22-139.178.89.65:51088.service - OpenSSH per-connection server daemon (139.178.89.65:51088). Oct 29 00:42:09.925814 systemd-logind[1563]: Removed session 12. 
Oct 29 00:42:10.013137 sshd[5007]: Accepted publickey for core from 139.178.89.65 port 51088 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:10.015432 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:10.022300 systemd-logind[1563]: New session 13 of user core. Oct 29 00:42:10.028679 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 29 00:42:10.186197 sshd[5010]: Connection closed by 139.178.89.65 port 51088 Oct 29 00:42:10.187624 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:10.195784 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit. Oct 29 00:42:10.196011 systemd[1]: sshd@12-64.23.202.85:22-139.178.89.65:51088.service: Deactivated successfully. Oct 29 00:42:10.198604 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 00:42:10.203421 systemd-logind[1563]: Removed session 13. Oct 29 00:42:12.121020 containerd[1577]: time="2025-10-29T00:42:12.120963901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:12.465326 containerd[1577]: time="2025-10-29T00:42:12.465167079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:12.466171 containerd[1577]: time="2025-10-29T00:42:12.466085711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:12.466350 containerd[1577]: time="2025-10-29T00:42:12.466090109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:12.466702 kubelet[2754]: E1029 00:42:12.466622 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:12.466702 kubelet[2754]: E1029 00:42:12.466681 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:12.468394 kubelet[2754]: E1029 00:42:12.467240 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64df77bb46-p5v5r_calico-apiserver(973f8417-d690-432c-be64-a1fb3fd7b7ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:12.468394 kubelet[2754]: E1029 00:42:12.467450 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" podUID="973f8417-d690-432c-be64-a1fb3fd7b7ed" Oct 29 00:42:15.119996 containerd[1577]: time="2025-10-29T00:42:15.119582163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:42:15.203739 systemd[1]: Started sshd@13-64.23.202.85:22-139.178.89.65:51090.service - OpenSSH per-connection server daemon (139.178.89.65:51090). Oct 29 00:42:15.272366 sshd[5034]: Accepted publickey for core from 139.178.89.65 port 51090 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:15.274310 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:15.281475 systemd-logind[1563]: New session 14 of user core. Oct 29 00:42:15.287695 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 29 00:42:15.455418 sshd[5037]: Connection closed by 139.178.89.65 port 51090 Oct 29 00:42:15.456453 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:15.461699 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit. Oct 29 00:42:15.462337 systemd[1]: sshd@13-64.23.202.85:22-139.178.89.65:51090.service: Deactivated successfully. Oct 29 00:42:15.466910 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 00:42:15.472051 systemd-logind[1563]: Removed session 14. Oct 29 00:42:15.503481 containerd[1577]: time="2025-10-29T00:42:15.503384942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:15.504485 containerd[1577]: time="2025-10-29T00:42:15.504411059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:42:15.504485 containerd[1577]: time="2025-10-29T00:42:15.504449809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:42:15.504800 kubelet[2754]: E1029 00:42:15.504754 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:42:15.505189 kubelet[2754]: E1029 00:42:15.504824 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:42:15.505756 kubelet[2754]: E1029 00:42:15.504931 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-mlqqf_calico-system(a7b26047-6837-40e0-9ca7-4fee0f45a405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:15.506728 containerd[1577]: time="2025-10-29T00:42:15.506692428Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:42:15.805594 containerd[1577]: time="2025-10-29T00:42:15.805431602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:15.806588 containerd[1577]: time="2025-10-29T00:42:15.806531279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:42:15.806810 containerd[1577]: time="2025-10-29T00:42:15.806563847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:42:15.806870 kubelet[2754]: E1029 00:42:15.806819 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:42:15.806926 kubelet[2754]: E1029 00:42:15.806886 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:42:15.807010 kubelet[2754]: E1029 00:42:15.806982 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-mlqqf_calico-system(a7b26047-6837-40e0-9ca7-4fee0f45a405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:15.807113 kubelet[2754]: E1029 00:42:15.807032 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:42:16.118318 kubelet[2754]: E1029 00:42:16.117957 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bdqhh" podUID="1db4eddb-85a3-4ff2-842c-fb7c92440b55" Oct 29 00:42:18.118714 kubelet[2754]: E1029 00:42:18.118558 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" podUID="8c11e02f-6f5f-41e7-8866-65b058b61720" Oct 29 00:42:19.121289 kubelet[2754]: E1029 00:42:19.121085 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d8c4588fb-cgxbb" podUID="fbcce175-ed1f-4ce1-b176-b5da4c1b969c" Oct 29 00:42:20.470684 systemd[1]: Started sshd@14-64.23.202.85:22-139.178.89.65:55370.service - OpenSSH per-connection server daemon (139.178.89.65:55370). Oct 29 00:42:20.572564 sshd[5051]: Accepted publickey for core from 139.178.89.65 port 55370 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:20.574499 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:20.580412 systemd-logind[1563]: New session 15 of user core. Oct 29 00:42:20.585592 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 29 00:42:20.811473 sshd[5054]: Connection closed by 139.178.89.65 port 55370 Oct 29 00:42:20.812592 sshd-session[5051]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:20.820993 systemd[1]: sshd@14-64.23.202.85:22-139.178.89.65:55370.service: Deactivated successfully. Oct 29 00:42:20.827701 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 00:42:20.829829 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit. Oct 29 00:42:20.832732 systemd-logind[1563]: Removed session 15. 
Oct 29 00:42:22.121503 kubelet[2754]: E1029 00:42:22.121418 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" podUID="b74f2d9f-7641-4b72-a20e-ab5d1471a2c4" Oct 29 00:42:23.477204 containerd[1577]: time="2025-10-29T00:42:23.477149981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d\" id:\"9c2bc318c1c56418a3da27953da7e8dac2ba69f312eeb739e4620a80b38d4d98\" pid:5078 exited_at:{seconds:1761698543 nanos:476830818}" Oct 29 00:42:25.829751 systemd[1]: Started sshd@15-64.23.202.85:22-139.178.89.65:55386.service - OpenSSH per-connection server daemon (139.178.89.65:55386). Oct 29 00:42:25.926540 sshd[5090]: Accepted publickey for core from 139.178.89.65 port 55386 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:25.928526 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:25.934720 systemd-logind[1563]: New session 16 of user core. Oct 29 00:42:25.938601 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 29 00:42:26.096171 sshd[5093]: Connection closed by 139.178.89.65 port 55386 Oct 29 00:42:26.097970 sshd-session[5090]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:26.104881 systemd[1]: sshd@15-64.23.202.85:22-139.178.89.65:55386.service: Deactivated successfully. Oct 29 00:42:26.107953 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 00:42:26.109677 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit. Oct 29 00:42:26.111563 systemd-logind[1563]: Removed session 16. 
Oct 29 00:42:27.119389 kubelet[2754]: E1029 00:42:27.119132 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bdqhh" podUID="1db4eddb-85a3-4ff2-842c-fb7c92440b55" Oct 29 00:42:28.117441 kubelet[2754]: E1029 00:42:28.117083 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" podUID="973f8417-d690-432c-be64-a1fb3fd7b7ed" Oct 29 00:42:29.119630 kubelet[2754]: E1029 00:42:29.118979 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" podUID="8c11e02f-6f5f-41e7-8866-65b058b61720" Oct 29 00:42:29.124901 kubelet[2754]: E1029 00:42:29.124037 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:42:31.113708 systemd[1]: Started sshd@16-64.23.202.85:22-139.178.89.65:57558.service - OpenSSH per-connection server daemon (139.178.89.65:57558). Oct 29 00:42:31.194098 sshd[5106]: Accepted publickey for core from 139.178.89.65 port 57558 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:31.199145 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:31.210374 systemd-logind[1563]: New session 17 of user core. 
Oct 29 00:42:31.217598 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 29 00:42:31.379443 sshd[5109]: Connection closed by 139.178.89.65 port 57558 Oct 29 00:42:31.380762 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:31.387510 systemd[1]: sshd@16-64.23.202.85:22-139.178.89.65:57558.service: Deactivated successfully. Oct 29 00:42:31.392402 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 00:42:31.394599 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit. Oct 29 00:42:31.399784 systemd-logind[1563]: Removed session 17. Oct 29 00:42:32.119878 kubelet[2754]: E1029 00:42:32.119808 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d8c4588fb-cgxbb" podUID="fbcce175-ed1f-4ce1-b176-b5da4c1b969c" Oct 29 00:42:36.118139 kubelet[2754]: E1029 00:42:36.118069 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" podUID="b74f2d9f-7641-4b72-a20e-ab5d1471a2c4" Oct 29 00:42:36.398894 systemd[1]: Started sshd@17-64.23.202.85:22-139.178.89.65:54962.service - OpenSSH per-connection server daemon (139.178.89.65:54962). Oct 29 00:42:36.464401 sshd[5121]: Accepted publickey for core from 139.178.89.65 port 54962 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:36.465702 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:36.471175 systemd-logind[1563]: New session 18 of user core. Oct 29 00:42:36.475588 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 29 00:42:36.615216 sshd[5124]: Connection closed by 139.178.89.65 port 54962 Oct 29 00:42:36.616585 sshd-session[5121]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:36.627282 systemd[1]: sshd@17-64.23.202.85:22-139.178.89.65:54962.service: Deactivated successfully. Oct 29 00:42:36.630160 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 00:42:36.631312 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit. Oct 29 00:42:36.637048 systemd[1]: Started sshd@18-64.23.202.85:22-139.178.89.65:54968.service - OpenSSH per-connection server daemon (139.178.89.65:54968). 
Oct 29 00:42:36.638716 systemd-logind[1563]: Removed session 18. Oct 29 00:42:36.708479 sshd[5136]: Accepted publickey for core from 139.178.89.65 port 54968 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:36.710116 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:36.718458 systemd-logind[1563]: New session 19 of user core. Oct 29 00:42:36.725606 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 29 00:42:37.033741 sshd[5139]: Connection closed by 139.178.89.65 port 54968 Oct 29 00:42:37.034562 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:37.047874 systemd[1]: sshd@18-64.23.202.85:22-139.178.89.65:54968.service: Deactivated successfully. Oct 29 00:42:37.050382 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 00:42:37.051710 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit. Oct 29 00:42:37.055984 systemd[1]: Started sshd@19-64.23.202.85:22-139.178.89.65:54974.service - OpenSSH per-connection server daemon (139.178.89.65:54974). Oct 29 00:42:37.057698 systemd-logind[1563]: Removed session 19. Oct 29 00:42:37.116321 kubelet[2754]: E1029 00:42:37.116241 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:42:37.122407 sshd[5149]: Accepted publickey for core from 139.178.89.65 port 54974 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:37.123807 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:37.130252 systemd-logind[1563]: New session 20 of user core. Oct 29 00:42:37.134640 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 29 00:42:37.750541 sshd[5152]: Connection closed by 139.178.89.65 port 54974 Oct 29 00:42:37.751691 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:37.765265 systemd[1]: sshd@19-64.23.202.85:22-139.178.89.65:54974.service: Deactivated successfully. Oct 29 00:42:37.768791 systemd[1]: session-20.scope: Deactivated successfully. Oct 29 00:42:37.771015 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit. Oct 29 00:42:37.777794 systemd[1]: Started sshd@20-64.23.202.85:22-139.178.89.65:54978.service - OpenSSH per-connection server daemon (139.178.89.65:54978). Oct 29 00:42:37.778666 systemd-logind[1563]: Removed session 20. Oct 29 00:42:37.915973 sshd[5167]: Accepted publickey for core from 139.178.89.65 port 54978 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:37.918611 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:37.931635 systemd-logind[1563]: New session 21 of user core. Oct 29 00:42:37.936601 systemd[1]: Started session-21.scope - Session 21 of User core. 
Oct 29 00:42:38.117483 kubelet[2754]: E1029 00:42:38.117389 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bdqhh" podUID="1db4eddb-85a3-4ff2-842c-fb7c92440b55" Oct 29 00:42:38.310583 sshd[5170]: Connection closed by 139.178.89.65 port 54978 Oct 29 00:42:38.311364 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:38.325129 systemd[1]: sshd@20-64.23.202.85:22-139.178.89.65:54978.service: Deactivated successfully. Oct 29 00:42:38.329598 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 00:42:38.332198 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit. Oct 29 00:42:38.339683 systemd[1]: Started sshd@21-64.23.202.85:22-139.178.89.65:54984.service - OpenSSH per-connection server daemon (139.178.89.65:54984). Oct 29 00:42:38.340123 systemd-logind[1563]: Removed session 21. Oct 29 00:42:38.403979 sshd[5181]: Accepted publickey for core from 139.178.89.65 port 54984 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:38.406079 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:38.411710 systemd-logind[1563]: New session 22 of user core. Oct 29 00:42:38.420615 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 29 00:42:38.568544 sshd[5184]: Connection closed by 139.178.89.65 port 54984 Oct 29 00:42:38.569800 sshd-session[5181]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:38.575921 systemd[1]: sshd@21-64.23.202.85:22-139.178.89.65:54984.service: Deactivated successfully. Oct 29 00:42:38.579162 systemd[1]: session-22.scope: Deactivated successfully. Oct 29 00:42:38.580863 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit. Oct 29 00:42:38.583936 systemd-logind[1563]: Removed session 22. 
Oct 29 00:42:40.119206 kubelet[2754]: E1029 00:42:40.118501 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" podUID="8c11e02f-6f5f-41e7-8866-65b058b61720" Oct 29 00:42:40.120442 kubelet[2754]: E1029 00:42:40.120322 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:42:43.120714 kubelet[2754]: E1029 00:42:43.120610 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" podUID="973f8417-d690-432c-be64-a1fb3fd7b7ed" Oct 29 00:42:43.586640 systemd[1]: Started sshd@22-64.23.202.85:22-139.178.89.65:55000.service - OpenSSH per-connection server daemon (139.178.89.65:55000). Oct 29 00:42:43.655770 sshd[5200]: Accepted publickey for core from 139.178.89.65 port 55000 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:43.657687 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:43.663497 systemd-logind[1563]: New session 23 of user core. Oct 29 00:42:43.675611 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 29 00:42:43.816081 sshd[5203]: Connection closed by 139.178.89.65 port 55000 Oct 29 00:42:43.816876 sshd-session[5200]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:43.821980 systemd[1]: sshd@22-64.23.202.85:22-139.178.89.65:55000.service: Deactivated successfully. Oct 29 00:42:43.824816 systemd[1]: session-23.scope: Deactivated successfully. Oct 29 00:42:43.825886 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit. Oct 29 00:42:43.827331 systemd-logind[1563]: Removed session 23. 
Oct 29 00:42:44.118851 kubelet[2754]: E1029 00:42:44.118799 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d8c4588fb-cgxbb" podUID="fbcce175-ed1f-4ce1-b176-b5da4c1b969c" Oct 29 00:42:48.835332 systemd[1]: Started sshd@23-64.23.202.85:22-139.178.89.65:43330.service - OpenSSH per-connection server daemon (139.178.89.65:43330). Oct 29 00:42:49.003438 sshd[5217]: Accepted publickey for core from 139.178.89.65 port 43330 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:49.006583 sshd-session[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:49.012122 systemd-logind[1563]: New session 24 of user core. Oct 29 00:42:49.021580 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 29 00:42:49.416314 sshd[5222]: Connection closed by 139.178.89.65 port 43330 Oct 29 00:42:49.419000 sshd-session[5217]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:49.425660 systemd[1]: sshd@23-64.23.202.85:22-139.178.89.65:43330.service: Deactivated successfully. Oct 29 00:42:49.428805 systemd[1]: session-24.scope: Deactivated successfully. Oct 29 00:42:49.432251 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit. Oct 29 00:42:49.436045 systemd-logind[1563]: Removed session 24. 
Oct 29 00:42:50.117697 kubelet[2754]: E1029 00:42:50.117642 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 29 00:42:50.118943 containerd[1577]: time="2025-10-29T00:42:50.118893277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:50.463056 containerd[1577]: time="2025-10-29T00:42:50.462791377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:50.466371 containerd[1577]: time="2025-10-29T00:42:50.463807604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:50.466371 containerd[1577]: time="2025-10-29T00:42:50.463910031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:50.466816 kubelet[2754]: E1029 00:42:50.466765 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:50.466912 kubelet[2754]: E1029 00:42:50.466828 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:50.466943 kubelet[2754]: E1029 00:42:50.466925 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64df77bb46-w7m6v_calico-apiserver(b74f2d9f-7641-4b72-a20e-ab5d1471a2c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:50.466984 kubelet[2754]: E1029 00:42:50.466961 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-w7m6v" podUID="b74f2d9f-7641-4b72-a20e-ab5d1471a2c4" Oct 29 00:42:52.118702 containerd[1577]: time="2025-10-29T00:42:52.118657223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:42:52.119801 kubelet[2754]: E1029 00:42:52.119062 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlqqf" podUID="a7b26047-6837-40e0-9ca7-4fee0f45a405" Oct 29 00:42:52.459426 containerd[1577]: time="2025-10-29T00:42:52.459080727Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:52.459861 containerd[1577]: time="2025-10-29T00:42:52.459829324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:52.459978 containerd[1577]: time="2025-10-29T00:42:52.459866818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:42:52.460315 kubelet[2754]: E1029 00:42:52.460270 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:52.460453 kubelet[2754]: E1029 00:42:52.460324 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:52.460957 kubelet[2754]: E1029 00:42:52.460921 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bdqhh_calico-system(1db4eddb-85a3-4ff2-842c-fb7c92440b55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:52.461126 kubelet[2754]: E1029 00:42:52.461094 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bdqhh" podUID="1db4eddb-85a3-4ff2-842c-fb7c92440b55" Oct 29 00:42:53.514999 containerd[1577]: time="2025-10-29T00:42:53.514938605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f744bd0cdbf9c7b32ef74461d09bcaf90e60108e405cc543a315eea8fe4330d\" 
id:\"8fff140d8a6408bdd0dcf434824f9c80f6423d186317bffb355b1f700e4227f5\" pid:5252 exited_at:{seconds:1761698573 nanos:514417474}" Oct 29 00:42:54.119646 containerd[1577]: time="2025-10-29T00:42:54.119058513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:42:54.432591 systemd[1]: Started sshd@24-64.23.202.85:22-139.178.89.65:43346.service - OpenSSH per-connection server daemon (139.178.89.65:43346). Oct 29 00:42:54.453536 containerd[1577]: time="2025-10-29T00:42:54.453333070Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:54.455369 containerd[1577]: time="2025-10-29T00:42:54.455209434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:42:54.455614 containerd[1577]: time="2025-10-29T00:42:54.455333987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:54.456055 kubelet[2754]: E1029 00:42:54.455936 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:54.456055 kubelet[2754]: E1029 00:42:54.455992 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:54.457684 kubelet[2754]: E1029 00:42:54.456080 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8f59fd667-w8gqk_calico-system(8c11e02f-6f5f-41e7-8866-65b058b61720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:54.457951 kubelet[2754]: E1029 00:42:54.457886 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8f59fd667-w8gqk" podUID="8c11e02f-6f5f-41e7-8866-65b058b61720" Oct 29 00:42:54.543417 sshd[5264]: Accepted publickey for core from 139.178.89.65 port 43346 ssh2: RSA SHA256:smEgY84YGI1KYS6vItm95Ji1wpEjlZC/2OCB3dco40g Oct 29 00:42:54.545390 sshd-session[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:54.555691 systemd-logind[1563]: 
New session 25 of user core. Oct 29 00:42:54.564654 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 29 00:42:54.797379 sshd[5267]: Connection closed by 139.178.89.65 port 43346 Oct 29 00:42:54.797190 sshd-session[5264]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:54.805182 systemd[1]: sshd@24-64.23.202.85:22-139.178.89.65:43346.service: Deactivated successfully. Oct 29 00:42:54.809942 systemd[1]: session-25.scope: Deactivated successfully. Oct 29 00:42:54.811004 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit. Oct 29 00:42:54.812824 systemd-logind[1563]: Removed session 25. Oct 29 00:42:56.119269 containerd[1577]: time="2025-10-29T00:42:56.119002308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:56.472612 containerd[1577]: time="2025-10-29T00:42:56.470664814Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:56.473527 containerd[1577]: time="2025-10-29T00:42:56.473438197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:56.473708 containerd[1577]: time="2025-10-29T00:42:56.473635900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:56.473928 kubelet[2754]: E1029 00:42:56.473885 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:56.475687 kubelet[2754]: E1029 00:42:56.473938 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:56.475687 kubelet[2754]: E1029 00:42:56.474031 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64df77bb46-p5v5r_calico-apiserver(973f8417-d690-432c-be64-a1fb3fd7b7ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:56.475687 kubelet[2754]: E1029 00:42:56.474066 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64df77bb46-p5v5r" podUID="973f8417-d690-432c-be64-a1fb3fd7b7ed" Oct 29 00:42:58.118562 containerd[1577]: time="2025-10-29T00:42:58.118509974Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:42:58.462557 containerd[1577]: time="2025-10-29T00:42:58.461709711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:58.463632 containerd[1577]: time="2025-10-29T00:42:58.463476108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:42:58.463632 containerd[1577]: time="2025-10-29T00:42:58.463596141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:42:58.466325 kubelet[2754]: E1029 00:42:58.465543 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:58.466325 kubelet[2754]: E1029 00:42:58.465613 2754 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:58.466325 kubelet[2754]: E1029 00:42:58.465722 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7d8c4588fb-cgxbb_calico-system(fbcce175-ed1f-4ce1-b176-b5da4c1b969c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:58.468378 containerd[1577]: time="2025-10-29T00:42:58.468053682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 00:42:58.836426 containerd[1577]: time="2025-10-29T00:42:58.836236001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:58.837133 containerd[1577]: time="2025-10-29T00:42:58.837010034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:42:58.837133 containerd[1577]: time="2025-10-29T00:42:58.837105432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:58.837515 kubelet[2754]: E1029 00:42:58.837470 2754 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:58.837701 kubelet[2754]: E1029 00:42:58.837658 2754 kuberuntime_image.go:43] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:58.837976 kubelet[2754]: E1029 00:42:58.837943 2754 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7d8c4588fb-cgxbb_calico-system(fbcce175-ed1f-4ce1-b176-b5da4c1b969c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:58.838269 kubelet[2754]: E1029 00:42:58.838227 2754 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d8c4588fb-cgxbb" podUID="fbcce175-ed1f-4ce1-b176-b5da4c1b969c" Oct 29 00:43:00.118206 kubelet[2754]: E1029 00:43:00.117763 2754 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"