Nov 4 23:53:22.126703 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 4 23:53:22.126752 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:53:22.126768 kernel: BIOS-provided physical RAM map:
Nov 4 23:53:22.126776 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 4 23:53:22.126783 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 4 23:53:22.126790 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 23:53:22.126804 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 4 23:53:22.126816 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 4 23:53:22.126824 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 23:53:22.126834 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 23:53:22.126841 kernel: NX (Execute Disable) protection: active
Nov 4 23:53:22.126848 kernel: APIC: Static calls initialized
Nov 4 23:53:22.126856 kernel: SMBIOS 2.8 present.
Nov 4 23:53:22.126864 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 4 23:53:22.126873 kernel: DMI: Memory slots populated: 1/1
Nov 4 23:53:22.126883 kernel: Hypervisor detected: KVM
Nov 4 23:53:22.126895 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 4 23:53:22.126903 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 23:53:22.126911 kernel: kvm-clock: using sched offset of 3842865685 cycles
Nov 4 23:53:22.128700 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 23:53:22.128716 kernel: tsc: Detected 2494.136 MHz processor
Nov 4 23:53:22.128726 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 23:53:22.128735 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 23:53:22.128749 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 4 23:53:22.128759 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 23:53:22.128768 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 23:53:22.128777 kernel: ACPI: Early table checksum verification disabled
Nov 4 23:53:22.128785 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 4 23:53:22.128794 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:22.128804 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:22.128815 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:22.128824 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 4 23:53:22.128833 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:22.128842 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:22.128851 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:22.128859 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:22.128868 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 4 23:53:22.128880 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 4 23:53:22.128889 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 4 23:53:22.128897 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 4 23:53:22.128910 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 4 23:53:22.128919 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 4 23:53:22.128930 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 4 23:53:22.128939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 4 23:53:22.128949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 4 23:53:22.128958 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Nov 4 23:53:22.128967 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Nov 4 23:53:22.128976 kernel: Zone ranges:
Nov 4 23:53:22.128988 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 23:53:22.128997 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 4 23:53:22.129006 kernel: Normal empty
Nov 4 23:53:22.129015 kernel: Device empty
Nov 4 23:53:22.129024 kernel: Movable zone start for each node
Nov 4 23:53:22.129033 kernel: Early memory node ranges
Nov 4 23:53:22.129041 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 23:53:22.129050 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 4 23:53:22.129062 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 4 23:53:22.129071 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 23:53:22.129080 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 23:53:22.129089 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 4 23:53:22.129098 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 23:53:22.129113 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 23:53:22.129122 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 23:53:22.129136 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 23:53:22.129145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 23:53:22.129154 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 23:53:22.129166 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 23:53:22.129176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 23:53:22.129185 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 23:53:22.129199 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 23:53:22.129218 kernel: TSC deadline timer available
Nov 4 23:53:22.129231 kernel: CPU topo: Max. logical packages: 1
Nov 4 23:53:22.129244 kernel: CPU topo: Max. logical dies: 1
Nov 4 23:53:22.129257 kernel: CPU topo: Max. dies per package: 1
Nov 4 23:53:22.129270 kernel: CPU topo: Max. threads per core: 1
Nov 4 23:53:22.129282 kernel: CPU topo: Num. cores per package: 2
Nov 4 23:53:22.129294 kernel: CPU topo: Num. threads per package: 2
Nov 4 23:53:22.129306 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 4 23:53:22.129324 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 23:53:22.129337 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 4 23:53:22.129346 kernel: Booting paravirtualized kernel on KVM
Nov 4 23:53:22.129356 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 23:53:22.129365 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 4 23:53:22.129374 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 4 23:53:22.129384 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 4 23:53:22.129396 kernel: pcpu-alloc: [0] 0 1
Nov 4 23:53:22.129405 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 4 23:53:22.129415 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:53:22.129455 kernel: random: crng init done
Nov 4 23:53:22.129464 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 23:53:22.129474 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 4 23:53:22.129483 kernel: Fallback order for Node 0: 0
Nov 4 23:53:22.129494 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Nov 4 23:53:22.129503 kernel: Policy zone: DMA32
Nov 4 23:53:22.129513 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 23:53:22.129522 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 4 23:53:22.129531 kernel: Kernel/User page tables isolation: enabled
Nov 4 23:53:22.129540 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 23:53:22.129549 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 23:53:22.129567 kernel: Dynamic Preempt: voluntary
Nov 4 23:53:22.129576 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 23:53:22.129586 kernel: rcu: RCU event tracing is enabled.
Nov 4 23:53:22.129596 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 4 23:53:22.129605 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 23:53:22.129618 kernel: Rude variant of Tasks RCU enabled.
Nov 4 23:53:22.129627 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 23:53:22.129638 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 23:53:22.129648 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 4 23:53:22.129657 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:53:22.129680 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:53:22.129690 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:53:22.129699 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 4 23:53:22.129708 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 23:53:22.129721 kernel: Console: colour VGA+ 80x25
Nov 4 23:53:22.129730 kernel: printk: legacy console [tty0] enabled
Nov 4 23:53:22.129761 kernel: printk: legacy console [ttyS0] enabled
Nov 4 23:53:22.129771 kernel: ACPI: Core revision 20240827
Nov 4 23:53:22.129780 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 23:53:22.129798 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 23:53:22.129810 kernel: x2apic enabled
Nov 4 23:53:22.129820 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 23:53:22.129830 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 23:53:22.129840 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Nov 4 23:53:22.129854 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494136)
Nov 4 23:53:22.129864 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 4 23:53:22.129874 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 4 23:53:22.129883 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 23:53:22.129896 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 23:53:22.129906 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 23:53:22.129915 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 4 23:53:22.129925 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 23:53:22.129934 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 23:53:22.129944 kernel: MDS: Mitigation: Clear CPU buffers
Nov 4 23:53:22.129954 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 4 23:53:22.129966 kernel: active return thunk: its_return_thunk
Nov 4 23:53:22.129994 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 4 23:53:22.130005 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 23:53:22.130015 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 23:53:22.130024 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 23:53:22.130034 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 23:53:22.130043 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 4 23:53:22.130069 kernel: Freeing SMP alternatives memory: 32K
Nov 4 23:53:22.130085 kernel: pid_max: default: 32768 minimum: 301
Nov 4 23:53:22.130097 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 23:53:22.130111 kernel: landlock: Up and running.
Nov 4 23:53:22.130125 kernel: SELinux: Initializing.
Nov 4 23:53:22.130138 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 23:53:22.130150 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 23:53:22.130168 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 4 23:53:22.130183 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 4 23:53:22.130194 kernel: signal: max sigframe size: 1776
Nov 4 23:53:22.130204 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 23:53:22.130214 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 23:53:22.130224 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 23:53:22.130233 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 4 23:53:22.130247 kernel: smp: Bringing up secondary CPUs ...
Nov 4 23:53:22.130260 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 23:53:22.130270 kernel: .... node #0, CPUs: #1
Nov 4 23:53:22.130279 kernel: smp: Brought up 1 node, 2 CPUs
Nov 4 23:53:22.130289 kernel: smpboot: Total of 2 processors activated (9976.54 BogoMIPS)
Nov 4 23:53:22.130300 kernel: Memory: 1989436K/2096612K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 102612K reserved, 0K cma-reserved)
Nov 4 23:53:22.130310 kernel: devtmpfs: initialized
Nov 4 23:53:22.130322 kernel: x86/mm: Memory block size: 128MB
Nov 4 23:53:22.130332 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 23:53:22.130341 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 4 23:53:22.130351 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 23:53:22.130361 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 23:53:22.130370 kernel: audit: initializing netlink subsys (disabled)
Nov 4 23:53:22.130380 kernel: audit: type=2000 audit(1762300399.375:1): state=initialized audit_enabled=0 res=1
Nov 4 23:53:22.130393 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 23:53:22.130405 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 23:53:22.130422 kernel: cpuidle: using governor menu
Nov 4 23:53:22.130436 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 23:53:22.130472 kernel: dca service started, version 1.12.1
Nov 4 23:53:22.130486 kernel: PCI: Using configuration type 1 for base access
Nov 4 23:53:22.130499 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 23:53:22.130518 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 23:53:22.130548 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 23:53:22.130559 kernel: ACPI: Added _OSI(Module Device)
Nov 4 23:53:22.130568 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 23:53:22.130578 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 23:53:22.130587 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 23:53:22.130597 kernel: ACPI: Interpreter enabled
Nov 4 23:53:22.130610 kernel: ACPI: PM: (supports S0 S5)
Nov 4 23:53:22.130640 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 23:53:22.130655 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 23:53:22.130691 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 23:53:22.130702 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 4 23:53:22.130712 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 23:53:22.131006 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 23:53:22.131216 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 4 23:53:22.131356 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 4 23:53:22.131375 kernel: acpiphp: Slot [3] registered
Nov 4 23:53:22.131385 kernel: acpiphp: Slot [4] registered
Nov 4 23:53:22.131395 kernel: acpiphp: Slot [5] registered
Nov 4 23:53:22.131405 kernel: acpiphp: Slot [6] registered
Nov 4 23:53:22.131420 kernel: acpiphp: Slot [7] registered
Nov 4 23:53:22.131429 kernel: acpiphp: Slot [8] registered
Nov 4 23:53:22.131439 kernel: acpiphp: Slot [9] registered
Nov 4 23:53:22.131449 kernel: acpiphp: Slot [10] registered
Nov 4 23:53:22.131459 kernel: acpiphp: Slot [11] registered
Nov 4 23:53:22.131468 kernel: acpiphp: Slot [12] registered
Nov 4 23:53:22.131478 kernel: acpiphp: Slot [13] registered
Nov 4 23:53:22.131490 kernel: acpiphp: Slot [14] registered
Nov 4 23:53:22.131499 kernel: acpiphp: Slot [15] registered
Nov 4 23:53:22.131509 kernel: acpiphp: Slot [16] registered
Nov 4 23:53:22.131519 kernel: acpiphp: Slot [17] registered
Nov 4 23:53:22.131529 kernel: acpiphp: Slot [18] registered
Nov 4 23:53:22.131538 kernel: acpiphp: Slot [19] registered
Nov 4 23:53:22.131547 kernel: acpiphp: Slot [20] registered
Nov 4 23:53:22.131557 kernel: acpiphp: Slot [21] registered
Nov 4 23:53:22.131576 kernel: acpiphp: Slot [22] registered
Nov 4 23:53:22.131586 kernel: acpiphp: Slot [23] registered
Nov 4 23:53:22.131595 kernel: acpiphp: Slot [24] registered
Nov 4 23:53:22.131605 kernel: acpiphp: Slot [25] registered
Nov 4 23:53:22.131614 kernel: acpiphp: Slot [26] registered
Nov 4 23:53:22.131624 kernel: acpiphp: Slot [27] registered
Nov 4 23:53:22.131634 kernel: acpiphp: Slot [28] registered
Nov 4 23:53:22.131646 kernel: acpiphp: Slot [29] registered
Nov 4 23:53:22.131655 kernel: acpiphp: Slot [30] registered
Nov 4 23:53:22.131688 kernel: acpiphp: Slot [31] registered
Nov 4 23:53:22.131697 kernel: PCI host bridge to bus 0000:00
Nov 4 23:53:22.131846 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 23:53:22.131974 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 23:53:22.132100 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 23:53:22.132237 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 4 23:53:22.132355 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 4 23:53:22.132542 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 23:53:22.132767 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 4 23:53:22.132928 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 4 23:53:22.133141 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 4 23:53:22.133371 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Nov 4 23:53:22.133518 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 4 23:53:22.133649 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 4 23:53:22.133871 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 4 23:53:22.134004 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 4 23:53:22.134207 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 4 23:53:22.134369 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Nov 4 23:53:22.134553 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 4 23:53:22.134736 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 4 23:53:22.134870 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 4 23:53:22.135017 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 4 23:53:22.135148 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 4 23:53:22.135360 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 4 23:53:22.135492 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Nov 4 23:53:22.135658 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Nov 4 23:53:22.135852 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 23:53:22.136013 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 23:53:22.136161 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Nov 4 23:53:22.136295 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Nov 4 23:53:22.136473 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 4 23:53:22.136619 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 23:53:22.136786 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Nov 4 23:53:22.136917 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Nov 4 23:53:22.137114 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 4 23:53:22.137299 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:53:22.137460 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Nov 4 23:53:22.137628 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Nov 4 23:53:22.137809 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 4 23:53:22.137989 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:53:22.138122 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Nov 4 23:53:22.138333 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Nov 4 23:53:22.138491 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 4 23:53:22.138655 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:53:22.138850 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Nov 4 23:53:22.139012 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Nov 4 23:53:22.139156 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 4 23:53:22.139310 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 23:53:22.139502 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Nov 4 23:53:22.139651 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 4 23:53:22.139693 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 23:53:22.139704 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 23:53:22.139714 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 23:53:22.139724 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 23:53:22.139734 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 4 23:53:22.139754 kernel: iommu: Default domain type: Translated
Nov 4 23:53:22.139786 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 23:53:22.139797 kernel: PCI: Using ACPI for IRQ routing
Nov 4 23:53:22.139806 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 23:53:22.139816 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 4 23:53:22.139827 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 4 23:53:22.139985 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 4 23:53:22.140200 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 4 23:53:22.140382 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 23:53:22.140395 kernel: vgaarb: loaded
Nov 4 23:53:22.140406 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 23:53:22.140416 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 23:53:22.140426 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 23:53:22.140436 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 23:53:22.140450 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 23:53:22.140459 kernel: pnp: PnP ACPI init
Nov 4 23:53:22.140469 kernel: pnp: PnP ACPI: found 4 devices
Nov 4 23:53:22.140479 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 23:53:22.140489 kernel: NET: Registered PF_INET protocol family
Nov 4 23:53:22.140499 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 23:53:22.140509 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 4 23:53:22.140537 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 23:53:22.140547 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 4 23:53:22.140556 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 4 23:53:22.140567 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 4 23:53:22.140577 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 4 23:53:22.140587 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 4 23:53:22.140597 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 23:53:22.140611 kernel: NET: Registered PF_XDP protocol family
Nov 4 23:53:22.140778 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 23:53:22.140932 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 23:53:22.141054 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 23:53:22.141173 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 4 23:53:22.141305 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 4 23:53:22.141449 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 4 23:53:22.141591 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 4 23:53:22.141605 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 4 23:53:22.141759 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 26869 usecs
Nov 4 23:53:22.141773 kernel: PCI: CLS 0 bytes, default 64
Nov 4 23:53:22.141784 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 4 23:53:22.141795 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Nov 4 23:53:22.141808 kernel: Initialise system trusted keyrings
Nov 4 23:53:22.141819 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 4 23:53:22.141829 kernel: Key type asymmetric registered
Nov 4 23:53:22.141839 kernel: Asymmetric key parser 'x509' registered
Nov 4 23:53:22.141849 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 23:53:22.141859 kernel: io scheduler mq-deadline registered
Nov 4 23:53:22.141868 kernel: io scheduler kyber registered
Nov 4 23:53:22.141878 kernel: io scheduler bfq registered
Nov 4 23:53:22.141891 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 23:53:22.141902 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 4 23:53:22.141911 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 4 23:53:22.141924 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 4 23:53:22.141940 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 23:53:22.141951 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 23:53:22.141961 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 23:53:22.141973 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 23:53:22.141983 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 23:53:22.142179 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 4 23:53:22.142196 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 23:53:22.142341 kernel: rtc_cmos 00:03: registered as rtc0
Nov 4 23:53:22.142469 kernel: rtc_cmos 00:03: setting system clock to 2025-11-04T23:53:20 UTC (1762300400)
Nov 4 23:53:22.142600 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 4 23:53:22.142753 kernel: intel_pstate: CPU model not supported
Nov 4 23:53:22.142766 kernel: NET: Registered PF_INET6 protocol family
Nov 4 23:53:22.142775 kernel: Segment Routing with IPv6
Nov 4 23:53:22.142786 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 23:53:22.142796 kernel: NET: Registered PF_PACKET protocol family
Nov 4 23:53:22.142806 kernel: Key type dns_resolver registered
Nov 4 23:53:22.142822 kernel: IPI shorthand broadcast: enabled
Nov 4 23:53:22.142832 kernel: sched_clock: Marking stable (1362003767, 155499396)->(1541406354, -23903191)
Nov 4 23:53:22.142841 kernel: registered taskstats version 1
Nov 4 23:53:22.142851 kernel: Loading compiled-in X.509 certificates
Nov 4 23:53:22.142861 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44'
Nov 4 23:53:22.142872 kernel: Demotion targets for Node 0: null
Nov 4 23:53:22.142881 kernel: Key type .fscrypt registered
Nov 4 23:53:22.142894 kernel: Key type fscrypt-provisioning registered
Nov 4 23:53:22.142923 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 23:53:22.142936 kernel: ima: Allocated hash algorithm: sha1
Nov 4 23:53:22.142946 kernel: ima: No architecture policies found
Nov 4 23:53:22.142957 kernel: clk: Disabling unused clocks
Nov 4 23:53:22.142967 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 4 23:53:22.142977 kernel: Write protecting the kernel read-only data: 40960k
Nov 4 23:53:22.142991 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 4 23:53:22.143001 kernel: Run /init as init process
Nov 4 23:53:22.143011 kernel: with arguments:
Nov 4 23:53:22.143021 kernel: /init
Nov 4 23:53:22.143031 kernel: with environment:
Nov 4 23:53:22.143042 kernel: HOME=/
Nov 4 23:53:22.143051 kernel: TERM=linux
Nov 4 23:53:22.143062 kernel: SCSI subsystem initialized
Nov 4 23:53:22.143075 kernel: libata version 3.00 loaded.
Nov 4 23:53:22.143246 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 4 23:53:22.143405 kernel: scsi host0: ata_piix
Nov 4 23:53:22.143548 kernel: scsi host1: ata_piix
Nov 4 23:53:22.143563 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Nov 4 23:53:22.143577 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Nov 4 23:53:22.143587 kernel: ACPI: bus type USB registered
Nov 4 23:53:22.143597 kernel: usbcore: registered new interface driver usbfs
Nov 4 23:53:22.143633 kernel: usbcore: registered new interface driver hub
Nov 4 23:53:22.143650 kernel: usbcore: registered new device driver usb
Nov 4 23:53:22.143848 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 4 23:53:22.143985 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 4 23:53:22.144142 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 4 23:53:22.144311 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 4 23:53:22.144515 kernel: hub 1-0:1.0: USB hub found
Nov 4 23:53:22.144655 kernel: hub 1-0:1.0: 2 ports detected
Nov 4 23:53:22.144854 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 4 23:53:22.144986 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 4 23:53:22.145000 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 23:53:22.145011 kernel: GPT:16515071 != 125829119
Nov 4 23:53:22.145021 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 23:53:22.145034 kernel: GPT:16515071 != 125829119
Nov 4 23:53:22.145044 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 23:53:22.145057 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 4 23:53:22.145242 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 4 23:53:22.145373 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Nov 4 23:53:22.145509 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Nov 4 23:53:22.145688 kernel: scsi host2: Virtio SCSI HBA
Nov 4 23:53:22.145703 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 23:53:22.145714 kernel: device-mapper: uevent: version 1.0.3
Nov 4 23:53:22.145725 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 23:53:22.145741 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 4 23:53:22.145757 kernel: raid6: avx2x4 gen() 19678 MB/s
Nov 4 23:53:22.145767 kernel: raid6: avx2x2 gen() 23553 MB/s
Nov 4 23:53:22.145781 kernel: raid6: avx2x1 gen() 19111 MB/s
Nov 4 23:53:22.145792 kernel: raid6: using algorithm avx2x2 gen() 23553 MB/s
Nov 4 23:53:22.145802 kernel: raid6: .... xor() 19653 MB/s, rmw enabled
Nov 4 23:53:22.145813 kernel: raid6: using avx2x2 recovery algorithm
Nov 4 23:53:22.145823 kernel: xor: automatically using best checksumming function avx
Nov 4 23:53:22.145834 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 23:53:22.145844 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (162)
Nov 4 23:53:22.145858 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc
Nov 4 23:53:22.145868 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:53:22.145878 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 23:53:22.145889 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 23:53:22.145899 kernel: loop: module loaded
Nov 4 23:53:22.145910 kernel: loop0: detected capacity change from 0 to 100120
Nov 4 23:53:22.145920 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 23:53:22.145934 systemd[1]: Successfully made /usr/ read-only.
Nov 4 23:53:22.145948 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:53:22.145960 systemd[1]: Detected virtualization kvm.
Nov 4 23:53:22.145970 systemd[1]: Detected architecture x86-64.
Nov 4 23:53:22.145980 systemd[1]: Running in initrd.
Nov 4 23:53:22.145990 systemd[1]: No hostname configured, using default hostname.
Nov 4 23:53:22.146007 systemd[1]: Hostname set to .
Nov 4 23:53:22.146022 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 23:53:22.146048 systemd[1]: Queued start job for default target initrd.target.
Nov 4 23:53:22.146062 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:53:22.146073 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:53:22.146084 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:53:22.146101 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 23:53:22.146112 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:53:22.146145 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 23:53:22.146157 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 23:53:22.146168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:53:22.146178 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:53:22.146192 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:53:22.146204 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:53:22.146221 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:53:22.146237 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:53:22.146251 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:53:22.146268 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:53:22.146286 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:53:22.146329 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 23:53:22.146349 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 23:53:22.146363 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:53:22.146374 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:53:22.146384 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:53:22.146395 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:53:22.146406 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 23:53:22.146419 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 23:53:22.146430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:53:22.146441 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 23:53:22.146457 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 23:53:22.146471 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 23:53:22.146481 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:53:22.146498 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:53:22.146514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:53:22.146530 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 23:53:22.146547 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:53:22.146588 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 23:53:22.146601 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 23:53:22.146736 systemd-journald[295]: Collecting audit messages is disabled.
Nov 4 23:53:22.146768 systemd-journald[295]: Journal started
Nov 4 23:53:22.146790 systemd-journald[295]: Runtime Journal (/run/log/journal/1a5d43b29f2c46b499538c4ade7f204f) is 4.9M, max 39.2M, 34.3M free.
Nov 4 23:53:22.149490 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:53:22.157791 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:53:22.165868 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:53:22.171583 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:53:22.187160 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 23:53:22.189026 systemd-modules-load[298]: Inserted module 'br_netfilter'
Nov 4 23:53:22.249822 kernel: Bridge firewalling registered
Nov 4 23:53:22.194341 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:53:22.197578 systemd-tmpfiles[311]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 23:53:22.250905 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:53:22.252975 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:53:22.254127 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:53:22.257931 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 23:53:22.259878 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:53:22.287459 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:53:22.291996 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:53:22.303462 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:53:22.307364 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 23:53:22.349417 dracut-cmdline[339]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:53:22.359384 systemd-resolved[330]: Positive Trust Anchors:
Nov 4 23:53:22.359400 systemd-resolved[330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 23:53:22.359405 systemd-resolved[330]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 23:53:22.359443 systemd-resolved[330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 23:53:22.397645 systemd-resolved[330]: Defaulting to hostname 'linux'.
Nov 4 23:53:22.399594 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 23:53:22.400545 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:53:22.515702 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 23:53:22.537240 kernel: iscsi: registered transport (tcp)
Nov 4 23:53:22.563863 kernel: iscsi: registered transport (qla4xxx)
Nov 4 23:53:22.564001 kernel: QLogic iSCSI HBA Driver
Nov 4 23:53:22.603398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:53:22.625999 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:53:22.627371 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:53:22.703146 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:53:22.707428 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 23:53:22.710863 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 23:53:22.758641 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:53:22.762853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:53:22.805391 systemd-udevd[573]: Using default interface naming scheme 'v257'.
Nov 4 23:53:22.818878 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:53:22.823113 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 23:53:22.863991 dracut-pre-trigger[648]: rd.md=0: removing MD RAID activation
Nov 4 23:53:22.869767 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:53:22.876401 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:53:22.922434 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:53:22.933904 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:53:22.958981 systemd-networkd[688]: lo: Link UP
Nov 4 23:53:22.958995 systemd-networkd[688]: lo: Gained carrier
Nov 4 23:53:22.960779 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:53:22.963044 systemd[1]: Reached target network.target - Network.
Nov 4 23:53:23.058694 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:53:23.064505 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 23:53:23.225160 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 4 23:53:23.245047 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 4 23:53:23.266981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 23:53:23.315275 kernel: cryptd: max_cpu_qlen set to 1000
Nov 4 23:53:23.326649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:53:23.328055 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:53:23.330776 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:53:23.336247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:53:23.368733 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Nov 4 23:53:23.368775 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 4 23:53:23.376171 systemd-networkd[688]: eth0: Link UP
Nov 4 23:53:23.382775 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 4 23:53:23.376604 systemd-networkd[688]: eth0: Gained carrier
Nov 4 23:53:23.376630 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Nov 4 23:53:23.393197 systemd-networkd[688]: eth0: DHCPv4 address 137.184.235.85/20, gateway 137.184.224.1 acquired from 169.254.169.253
Nov 4 23:53:23.400394 kernel: AES CTR mode by8 optimization enabled
Nov 4 23:53:23.406398 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 4 23:53:23.418192 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 23:53:23.454266 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:53:23.456119 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 23:53:23.459055 systemd-networkd[688]: eth1: Link UP
Nov 4 23:53:23.459410 systemd-networkd[688]: eth1: Gained carrier
Nov 4 23:53:23.459430 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:53:23.472700 disk-uuid[814]: Primary Header is updated.
Nov 4 23:53:23.472700 disk-uuid[814]: Secondary Entries is updated.
Nov 4 23:53:23.472700 disk-uuid[814]: Secondary Header is updated.
Nov 4 23:53:23.475032 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.14/20 acquired from 169.254.169.253
Nov 4 23:53:23.487370 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:53:23.597088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:53:23.657206 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:53:23.658913 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:53:23.659646 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:53:23.662969 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 23:53:23.693913 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:53:24.528151 disk-uuid[817]: Warning: The kernel is still using the old partition table.
Nov 4 23:53:24.528151 disk-uuid[817]: The new table will be used at the next reboot or after you
Nov 4 23:53:24.528151 disk-uuid[817]: run partprobe(8) or kpartx(8)
Nov 4 23:53:24.528151 disk-uuid[817]: The operation has completed successfully.
Nov 4 23:53:24.535257 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 23:53:24.535452 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 23:53:24.538108 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 4 23:53:24.586728 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839)
Nov 4 23:53:24.589740 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:53:24.592692 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:53:24.598262 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:53:24.598348 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:53:24.608776 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:53:24.610103 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 4 23:53:24.614033 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 4 23:53:24.686186 systemd-networkd[688]: eth1: Gained IPv6LL
Nov 4 23:53:24.873272 ignition[858]: Ignition 2.22.0
Nov 4 23:53:24.874587 ignition[858]: Stage: fetch-offline
Nov 4 23:53:24.874681 ignition[858]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:53:24.874699 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:53:24.874877 ignition[858]: parsed url from cmdline: ""
Nov 4 23:53:24.877561 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:53:24.874883 ignition[858]: no config URL provided
Nov 4 23:53:24.874892 ignition[858]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 23:53:24.874907 ignition[858]: no config at "/usr/lib/ignition/user.ign"
Nov 4 23:53:24.874916 ignition[858]: failed to fetch config: resource requires networking
Nov 4 23:53:24.880909 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 4 23:53:24.876050 ignition[858]: Ignition finished successfully
Nov 4 23:53:24.944269 ignition[864]: Ignition 2.22.0
Nov 4 23:53:24.944285 ignition[864]: Stage: fetch
Nov 4 23:53:24.944470 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:53:24.944482 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:53:24.944619 ignition[864]: parsed url from cmdline: ""
Nov 4 23:53:24.944625 ignition[864]: no config URL provided
Nov 4 23:53:24.944631 ignition[864]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 23:53:24.944640 ignition[864]: no config at "/usr/lib/ignition/user.ign"
Nov 4 23:53:24.944683 ignition[864]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 4 23:53:24.980637 ignition[864]: GET result: OK
Nov 4 23:53:24.981685 ignition[864]: parsing config with SHA512: a68dc93b69349ee71ba4748a16d41c12303e16938949a4f9c6d9c1eb9143edba724e7214df8d24d858e4cfb8677dc75ef0b28d6bc8e008a8c458cef81f145ea8
Nov 4 23:53:24.989342 unknown[864]: fetched base config from "system"
Nov 4 23:53:24.989376 unknown[864]: fetched base config from "system"
Nov 4 23:53:24.990420 ignition[864]: fetch: fetch complete
Nov 4 23:53:24.989387 unknown[864]: fetched user config from "digitalocean"
Nov 4 23:53:24.990431 ignition[864]: fetch: fetch passed
Nov 4 23:53:24.990534 ignition[864]: Ignition finished successfully
Nov 4 23:53:24.994471 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 4 23:53:24.997521 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 23:53:25.037534 ignition[870]: Ignition 2.22.0
Nov 4 23:53:25.037550 ignition[870]: Stage: kargs
Nov 4 23:53:25.037771 ignition[870]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:53:25.037782 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:53:25.039057 ignition[870]: kargs: kargs passed
Nov 4 23:53:25.039124 ignition[870]: Ignition finished successfully
Nov 4 23:53:25.042612 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 23:53:25.044676 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 23:53:25.098959 ignition[877]: Ignition 2.22.0
Nov 4 23:53:25.098999 ignition[877]: Stage: disks
Nov 4 23:53:25.099275 ignition[877]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:53:25.099292 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:53:25.101983 ignition[877]: disks: disks passed
Nov 4 23:53:25.102056 ignition[877]: Ignition finished successfully
Nov 4 23:53:25.106213 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 23:53:25.110068 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 23:53:25.110842 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 23:53:25.111901 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:53:25.113061 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:53:25.113937 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:53:25.116650 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 4 23:53:25.166270 systemd-fsck[886]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 4 23:53:25.172644 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 23:53:25.178834 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 23:53:25.353569 kernel: EXT4-fs (vda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none.
Nov 4 23:53:25.353039 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 23:53:25.355354 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:53:25.359087 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:53:25.361918 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 23:53:25.367873 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Nov 4 23:53:25.376174 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 4 23:53:25.380503 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 23:53:25.380569 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:53:25.387796 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (894)
Nov 4 23:53:25.387290 systemd-networkd[688]: eth0: Gained IPv6LL
Nov 4 23:53:25.392961 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:53:25.393334 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 23:53:25.411631 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:53:25.411691 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:53:25.411708 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:53:25.413062 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:53:25.420860 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 23:53:25.499042 coreos-metadata[897]: Nov 04 23:53:25.498 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 4 23:53:25.514322 coreos-metadata[896]: Nov 04 23:53:25.513 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 4 23:53:25.515857 coreos-metadata[897]: Nov 04 23:53:25.515 INFO Fetch successful
Nov 4 23:53:25.520352 initrd-setup-root[924]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 23:53:25.522856 coreos-metadata[897]: Nov 04 23:53:25.522 INFO wrote hostname ci-4487.0.0-n-b9f348caa0 to /sysroot/etc/hostname
Nov 4 23:53:25.524936 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 4 23:53:25.530122 coreos-metadata[896]: Nov 04 23:53:25.527 INFO Fetch successful
Nov 4 23:53:25.532703 initrd-setup-root[932]: cut: /sysroot/etc/group: No such file or directory
Nov 4 23:53:25.536424 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Nov 4 23:53:25.537197 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Nov 4 23:53:25.544770 initrd-setup-root[940]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 23:53:25.550636 initrd-setup-root[947]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 23:53:25.688883 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 23:53:25.691564 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 23:53:25.693167 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 23:53:25.730910 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 23:53:25.732681 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:53:25.761545 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 23:53:25.785897 ignition[1015]: INFO : Ignition 2.22.0
Nov 4 23:53:25.785897 ignition[1015]: INFO : Stage: mount
Nov 4 23:53:25.787343 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:53:25.787343 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:53:25.789390 ignition[1015]: INFO : mount: mount passed
Nov 4 23:53:25.789913 ignition[1015]: INFO : Ignition finished successfully
Nov 4 23:53:25.792265 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 23:53:25.794028 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 23:53:25.818394 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:53:25.843694 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1027)
Nov 4 23:53:25.846731 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:53:25.846831 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:53:25.851997 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:53:25.852084 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:53:25.854601 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:53:25.903993 ignition[1044]: INFO : Ignition 2.22.0
Nov 4 23:53:25.903993 ignition[1044]: INFO : Stage: files
Nov 4 23:53:25.905518 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:53:25.905518 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:53:25.907046 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 23:53:25.908145 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 23:53:25.908145 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 23:53:25.914998 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 23:53:25.916247 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 23:53:25.917014 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 23:53:25.916955 unknown[1044]: wrote ssh authorized keys file for user: core
Nov 4 23:53:25.918741 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:53:25.918741 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 4 23:53:26.027326 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 23:53:26.073741 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:53:26.073741 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 23:53:26.076024 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 23:53:26.076024 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:53:26.076024 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:53:26.076024 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:53:26.076024 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:53:26.076024 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:53:26.076024 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:53:26.081489 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:53:26.081489 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:53:26.081489 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 23:53:26.081489 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 23:53:26.081489 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 23:53:26.081489 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 4 23:53:26.482964 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 4 23:53:27.038076 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 23:53:27.039458 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 4 23:53:27.040288 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:53:27.041614 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:53:27.041614 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 4 23:53:27.041614 ignition[1044]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 23:53:27.044348 ignition[1044]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 23:53:27.044348 ignition[1044]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:53:27.044348 ignition[1044]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:53:27.044348 ignition[1044]: INFO : files: files passed
Nov 4 23:53:27.044348 ignition[1044]: INFO : Ignition finished successfully
Nov 4 23:53:27.044428 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 23:53:27.046843 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 23:53:27.051901 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 23:53:27.071302 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 23:53:27.071480 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 23:53:27.084865 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:53:27.084865 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:53:27.086592 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:53:27.088870 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:53:27.090422 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 23:53:27.093406 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 23:53:27.146413 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 23:53:27.146598 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 23:53:27.148273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 23:53:27.148820 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 23:53:27.150101 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 23:53:27.151488 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 23:53:27.185412 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:53:27.188325 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 23:53:27.219565 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:53:27.221062 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:53:27.221853 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:53:27.223201 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 23:53:27.224416 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 23:53:27.224770 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:53:27.226108 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 23:53:27.226892 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 23:53:27.227646 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 23:53:27.228567 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:53:27.229595 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 23:53:27.230885 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:53:27.231911 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 23:53:27.233263 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:53:27.234687 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 23:53:27.236065 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 23:53:27.237140 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 23:53:27.238127 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 23:53:27.238635 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:53:27.240756 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:53:27.241745 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:53:27.242828 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 23:53:27.243088 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:53:27.244014 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 23:53:27.244215 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:53:27.245510 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 23:53:27.245764 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:53:27.247155 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 23:53:27.247317 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 23:53:27.248121 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 4 23:53:27.248279 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 4 23:53:27.250884 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 23:53:27.253128 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 23:53:27.253437 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:53:27.259092 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 23:53:27.260918 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 23:53:27.262967 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:53:27.264095 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 23:53:27.264324 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:53:27.267519 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 23:53:27.267849 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:53:27.281097 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 23:53:27.284814 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 23:53:27.317196 ignition[1099]: INFO : Ignition 2.22.0
Nov 4 23:53:27.317196 ignition[1099]: INFO : Stage: umount
Nov 4 23:53:27.318641 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:53:27.318641 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:53:27.381226 ignition[1099]: INFO : umount: umount passed
Nov 4 23:53:27.381226 ignition[1099]: INFO : Ignition finished successfully
Nov 4 23:53:27.318908 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 23:53:27.387314 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 23:53:27.389958 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 23:53:27.394260 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 23:53:27.394491 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 23:53:27.397383 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 23:53:27.397472 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 23:53:27.398926 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 4 23:53:27.399006 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 4 23:53:27.405204 systemd[1]: Stopped target network.target - Network.
Nov 4 23:53:27.405682 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 23:53:27.405946 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:53:27.407212 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 23:53:27.408091 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 23:53:27.408542 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:53:27.409117 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 23:53:27.411833 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 23:53:27.413726 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 23:53:27.413815 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:53:27.415335 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 23:53:27.415386 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:53:27.417020 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 23:53:27.417106 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 23:53:27.418801 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 23:53:27.418890 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 23:53:27.420550 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 23:53:27.423551 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 23:53:27.427275 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 23:53:27.431097 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 23:53:27.440095 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 23:53:27.440328 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 23:53:27.448141 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 23:53:27.448362 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 23:53:27.462495 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 23:53:27.462703 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 23:53:27.486500 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 23:53:27.491373 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 23:53:27.491466 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:53:27.494174 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 23:53:27.494726 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 23:53:27.494813 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:53:27.495934 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 23:53:27.496003 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:53:27.497546 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 23:53:27.497597 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:53:27.500643 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:53:27.520489 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 23:53:27.520780 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:53:27.527533 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 23:53:27.527695 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:53:27.529263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 23:53:27.529319 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:53:27.531651 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 23:53:27.531846 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:53:27.535475 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 23:53:27.535585 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:53:27.536910 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 23:53:27.537002 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:53:27.540881 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 23:53:27.541408 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 23:53:27.541481 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:53:27.544069 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 23:53:27.544139 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:53:27.545164 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:53:27.545224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:53:27.570190 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 23:53:27.571885 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 23:53:27.574374 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 23:53:27.574719 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 23:53:27.576834 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 23:53:27.578403 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 23:53:27.603472 systemd[1]: Switching root.
Nov 4 23:53:27.652694 systemd-journald[295]: Received SIGTERM from PID 1 (systemd).
Nov 4 23:53:27.652817 systemd-journald[295]: Journal stopped
Nov 4 23:53:29.199289 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 23:53:29.199382 kernel: SELinux: policy capability open_perms=1
Nov 4 23:53:29.199399 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 23:53:29.199413 kernel: SELinux: policy capability always_check_network=0
Nov 4 23:53:29.199426 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 23:53:29.199444 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 23:53:29.199461 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 23:53:29.199474 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 23:53:29.199496 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 23:53:29.199511 kernel: audit: type=1403 audit(1762300407.830:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 23:53:29.199527 systemd[1]: Successfully loaded SELinux policy in 95.548ms.
Nov 4 23:53:29.199563 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.279ms.
Nov 4 23:53:29.199581 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:53:29.199596 systemd[1]: Detected virtualization kvm.
Nov 4 23:53:29.199611 systemd[1]: Detected architecture x86-64.
Nov 4 23:53:29.199625 systemd[1]: Detected first boot.
Nov 4 23:53:29.199639 systemd[1]: Hostname set to <ci-4487.0.0-n-b9f348caa0>.
Nov 4 23:53:29.199653 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 23:53:29.201749 zram_generator::config[1142]: No configuration found.
Nov 4 23:53:29.201786 kernel: Guest personality initialized and is inactive
Nov 4 23:53:29.201808 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 23:53:29.201828 kernel: Initialized host personality
Nov 4 23:53:29.201848 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 23:53:29.201871 systemd[1]: Populated /etc with preset unit settings.
Nov 4 23:53:29.201888 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 23:53:29.201907 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 23:53:29.201922 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 23:53:29.201937 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 23:53:29.201951 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 23:53:29.201964 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 23:53:29.201978 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 23:53:29.201993 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 23:53:29.202009 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 23:53:29.202024 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 23:53:29.202038 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 23:53:29.202052 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:53:29.202066 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:53:29.202119 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 23:53:29.202136 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 23:53:29.202163 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 23:53:29.202178 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:53:29.202191 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 23:53:29.202206 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:53:29.202220 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:53:29.202240 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 23:53:29.202256 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 23:53:29.202269 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:53:29.202282 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 23:53:29.202296 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:53:29.202311 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:53:29.202324 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:53:29.202341 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:53:29.202354 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 23:53:29.202385 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 23:53:29.202405 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 23:53:29.202423 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:53:29.202437 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:53:29.202450 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:53:29.202463 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 23:53:29.202481 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 4 23:53:29.202495 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 4 23:53:29.202508 systemd[1]: Mounting media.mount - External Media Directory...
Nov 4 23:53:29.202523 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:29.202537 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 4 23:53:29.202550 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 4 23:53:29.202570 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 4 23:53:29.202584 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 4 23:53:29.202597 systemd[1]: Reached target machines.target - Containers.
Nov 4 23:53:29.202611 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 4 23:53:29.202625 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:53:29.202638 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:53:29.202652 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 4 23:53:29.202690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:53:29.202704 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 23:53:29.202718 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:53:29.202731 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 4 23:53:29.202745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:53:29.202759 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 4 23:53:29.202774 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 4 23:53:29.202793 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 4 23:53:29.202808 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 4 23:53:29.202825 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 4 23:53:29.202840 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:53:29.202853 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:53:29.202867 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:53:29.202881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:53:29.202898 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 4 23:53:29.202911 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 4 23:53:29.202926 kernel: fuse: init (API version 7.41)
Nov 4 23:53:29.202945 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:53:29.202960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:29.202973 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 4 23:53:29.202987 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 4 23:53:29.203003 systemd[1]: Mounted media.mount - External Media Directory.
Nov 4 23:53:29.203019 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 4 23:53:29.203054 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 4 23:53:29.203072 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 4 23:53:29.203086 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:53:29.203100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:53:29.203114 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:53:29.203128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:53:29.203147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:53:29.203160 kernel: ACPI: bus type drm_connector registered
Nov 4 23:53:29.203173 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 4 23:53:29.203187 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 4 23:53:29.203200 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 4 23:53:29.203217 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 4 23:53:29.203231 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 23:53:29.203247 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 23:53:29.203261 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:53:29.203275 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:53:29.203290 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 4 23:53:29.203305 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 4 23:53:29.203319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:53:29.203336 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:53:29.203350 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:53:29.203421 systemd-journald[1219]: Collecting audit messages is disabled.
Nov 4 23:53:29.203466 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 4 23:53:29.203518 systemd-journald[1219]: Journal started
Nov 4 23:53:29.203553 systemd-journald[1219]: Runtime Journal (/run/log/journal/1a5d43b29f2c46b499538c4ade7f204f) is 4.9M, max 39.2M, 34.3M free.
Nov 4 23:53:28.709850 systemd[1]: Queued start job for default target multi-user.target.
Nov 4 23:53:28.734979 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 4 23:53:28.735823 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 4 23:53:29.209718 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:53:29.214055 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 4 23:53:29.217220 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 23:53:29.218331 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 23:53:29.234388 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 4 23:53:29.245260 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:53:29.246500 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 4 23:53:29.247296 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 4 23:53:29.247347 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:53:29.249791 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 4 23:53:29.250789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:53:29.254979 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 4 23:53:29.258947 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 4 23:53:29.259811 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:53:29.263657 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 4 23:53:29.267395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:53:29.275020 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 4 23:53:29.285385 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 4 23:53:29.289419 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:53:29.305538 systemd-journald[1219]: Time spent on flushing to /var/log/journal/1a5d43b29f2c46b499538c4ade7f204f is 76.614ms for 994 entries.
Nov 4 23:53:29.305538 systemd-journald[1219]: System Journal (/var/log/journal/1a5d43b29f2c46b499538c4ade7f204f) is 8M, max 163.5M, 155.5M free.
Nov 4 23:53:29.397985 systemd-journald[1219]: Received client request to flush runtime journal.
Nov 4 23:53:29.398423 kernel: loop1: detected capacity change from 0 to 128048
Nov 4 23:53:29.314893 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 4 23:53:29.316032 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 4 23:53:29.323090 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 4 23:53:29.365354 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:53:29.381765 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 4 23:53:29.387838 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:53:29.391153 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:53:29.400527 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 4 23:53:29.403958 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 4 23:53:29.415082 kernel: loop2: detected capacity change from 0 to 8
Nov 4 23:53:29.428876 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 4 23:53:29.443693 kernel: loop3: detected capacity change from 0 to 110984
Nov 4 23:53:29.447432 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Nov 4 23:53:29.447951 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Nov 4 23:53:29.454547 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:53:29.485688 kernel: loop4: detected capacity change from 0 to 219144
Nov 4 23:53:29.504265 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 4 23:53:29.515688 kernel: loop5: detected capacity change from 0 to 128048
Nov 4 23:53:29.534695 kernel: loop6: detected capacity change from 0 to 8
Nov 4 23:53:29.544702 kernel: loop7: detected capacity change from 0 to 110984
Nov 4 23:53:29.566699 kernel: loop1: detected capacity change from 0 to 219144
Nov 4 23:53:29.585752 systemd-resolved[1280]: Positive Trust Anchors:
Nov 4 23:53:29.585772 systemd-resolved[1280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 23:53:29.585777 systemd-resolved[1280]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 23:53:29.585814 systemd-resolved[1280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 23:53:29.586565 (sd-merge)[1296]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'.
Nov 4 23:53:29.593257 (sd-merge)[1296]: Merged extensions into '/usr'.
Nov 4 23:53:29.601917 systemd[1]: Reload requested from client PID 1270 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 4 23:53:29.601943 systemd[1]: Reloading...
Nov 4 23:53:29.609260 systemd-resolved[1280]: Using system hostname 'ci-4487.0.0-n-b9f348caa0'.
Nov 4 23:53:29.740708 zram_generator::config[1326]: No configuration found.
Nov 4 23:53:30.110214 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 4 23:53:30.110449 systemd[1]: Reloading finished in 507 ms.
Nov 4 23:53:30.125097 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 23:53:30.127864 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:53:30.129118 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 4 23:53:30.140905 systemd[1]: Starting ensure-sysext.service...
Nov 4 23:53:30.145000 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:53:30.187281 systemd[1]: Reload requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)...
Nov 4 23:53:30.187305 systemd[1]: Reloading...
Nov 4 23:53:30.199789 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 4 23:53:30.199845 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 4 23:53:30.200167 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 4 23:53:30.200480 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 4 23:53:30.201840 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 4 23:53:30.202173 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Nov 4 23:53:30.202233 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Nov 4 23:53:30.209210 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:53:30.209226 systemd-tmpfiles[1369]: Skipping /boot
Nov 4 23:53:30.223952 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:53:30.223969 systemd-tmpfiles[1369]: Skipping /boot
Nov 4 23:53:30.345709 zram_generator::config[1399]: No configuration found.
Nov 4 23:53:30.612706 systemd[1]: Reloading finished in 424 ms.
Nov 4 23:53:30.626100 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 4 23:53:30.638899 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:53:30.650186 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 23:53:30.653991 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 4 23:53:30.657068 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 4 23:53:30.665932 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 4 23:53:30.669219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:53:30.673224 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 4 23:53:30.678424 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:30.679739 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:53:30.683128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:53:30.698936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:53:30.702214 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:53:30.703482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:53:30.703627 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:53:30.703756 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:30.709312 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:30.709553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:53:30.709762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:53:30.709860 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:53:30.709993 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:30.716533 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:30.717863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:53:30.722300 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 23:53:30.723922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:53:30.724093 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:53:30.724234 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:30.728917 systemd[1]: Finished ensure-sysext.service.
Nov 4 23:53:30.749445 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 4 23:53:30.787089 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:53:30.787373 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:53:30.803853 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 4 23:53:30.805118 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 23:53:30.805344 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 23:53:30.808867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:53:30.809068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:53:30.816510 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:53:30.821001 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:53:30.821209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:53:30.822214 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:53:30.866287 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 4 23:53:30.884053 systemd-udevd[1448]: Using default interface naming scheme 'v257'.
Nov 4 23:53:30.896680 augenrules[1483]: No rules
Nov 4 23:53:30.895985 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 23:53:30.897382 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 23:53:30.899193 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 4 23:53:30.903099 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 23:53:30.950053 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 4 23:53:30.951196 systemd[1]: Reached target time-set.target - System Time Set.
Nov 4 23:53:30.975060 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:53:30.985342 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:53:31.127059 systemd-networkd[1493]: lo: Link UP
Nov 4 23:53:31.127071 systemd-networkd[1493]: lo: Gained carrier
Nov 4 23:53:31.131106 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:53:31.136124 systemd[1]: Reached target network.target - Network.
Nov 4 23:53:31.142037 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 4 23:53:31.149182 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 4 23:53:31.199612 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 4 23:53:31.264320 systemd-networkd[1493]: eth1: Configuring with /run/systemd/network/10-8a:41:50:3b:bf:ef.network.
Nov 4 23:53:31.267822 systemd-networkd[1493]: eth1: Link UP
Nov 4 23:53:31.268225 systemd-networkd[1493]: eth1: Gained carrier
Nov 4 23:53:31.281905 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Nov 4 23:53:31.311437 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Nov 4 23:53:31.312143 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 4 23:53:31.317055 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 4 23:53:31.317924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:31.318131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:53:31.320941 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:53:31.325049 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:53:31.330568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:53:31.331403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:53:31.331453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:53:31.331530 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 23:53:31.331553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:31.386369 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 4 23:53:31.385718 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 4 23:53:31.405886 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:53:31.409926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:53:31.426652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:53:31.427607 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:53:31.430840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:53:31.431113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:53:31.434637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:53:31.435602 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:53:31.485282 systemd-networkd[1493]: eth0: Configuring with /run/systemd/network/10-9e:e9:07:f8:39:4e.network.
Nov 4 23:53:31.492179 systemd-networkd[1493]: eth0: Link UP
Nov 4 23:53:31.498883 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Nov 4 23:53:31.501289 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Nov 4 23:53:31.503244 systemd-networkd[1493]: eth0: Gained carrier
Nov 4 23:53:31.504250 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 23:53:31.507388 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 4 23:53:31.508903 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Nov 4 23:53:31.510849 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Nov 4 23:53:31.547890 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 4 23:53:31.575712 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 4 23:53:31.583700 kernel: ACPI: button: Power Button [PWRF]
Nov 4 23:53:31.602723 kernel: mousedev: PS/2 mouse device common for all mice
Nov 4 23:53:31.605281 ldconfig[1446]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 4 23:53:31.612001 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 4 23:53:31.615857 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 4 23:53:31.648775 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 4 23:53:31.650927 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:53:31.652034 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 4 23:53:31.654153 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 4 23:53:31.655802 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 4 23:53:31.656694 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 4 23:53:31.658133 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 4 23:53:31.659709 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 4 23:53:31.659983 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 4 23:53:31.661826 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 4 23:53:31.663798 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 4 23:53:31.663858 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:53:31.664750 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:53:31.668874 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 4 23:53:31.681823 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 4 23:53:31.686765 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 4 23:53:31.688871 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 4 23:53:31.689385 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 4 23:53:31.700523 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 4 23:53:31.703844 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 4 23:53:31.705202 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 4 23:53:31.708552 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:53:31.709747 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:53:31.710810 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 4 23:53:31.710841 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 4 23:53:31.713909 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 4 23:53:31.718858 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 4 23:53:31.728105 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 4 23:53:31.733011 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 4 23:53:31.736944 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 4 23:53:31.745106 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 4 23:53:31.745715 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 4 23:53:31.752980 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 4 23:53:31.760005 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 4 23:53:31.771199 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 4 23:53:31.771849 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Refreshing passwd entry cache
Nov 4 23:53:31.772154 oslogin_cache_refresh[1563]: Refreshing passwd entry cache
Nov 4 23:53:31.775555 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Failure getting users, quitting
Nov 4 23:53:31.775707 oslogin_cache_refresh[1563]: Failure getting users, quitting
Nov 4 23:53:31.775809 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 23:53:31.775842 oslogin_cache_refresh[1563]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 23:53:31.775929 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Refreshing group entry cache
Nov 4 23:53:31.775958 oslogin_cache_refresh[1563]: Refreshing group entry cache
Nov 4 23:53:31.776970 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Failure getting groups, quitting
Nov 4 23:53:31.777039 oslogin_cache_refresh[1563]: Failure getting groups, quitting
Nov 4 23:53:31.777090 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 23:53:31.777117 oslogin_cache_refresh[1563]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 23:53:31.778308 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 4 23:53:31.793123 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 4 23:53:31.808076 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 4 23:53:31.809385 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 4 23:53:31.810030 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 4 23:53:31.811996 systemd[1]: Starting update-engine.service - Update Engine...
Nov 4 23:53:31.816862 jq[1561]: false
Nov 4 23:53:31.825159 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 4 23:53:31.830743 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 4 23:53:31.831643 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 4 23:53:31.833030 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 4 23:53:31.833472 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 4 23:53:31.833860 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 4 23:53:31.835434 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 4 23:53:31.835656 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 4 23:53:31.880485 jq[1573]: true
Nov 4 23:53:31.892799 extend-filesystems[1562]: Found /dev/vda6
Nov 4 23:53:31.895119 update_engine[1571]: I20251104 23:53:31.890187 1571 main.cc:92] Flatcar Update Engine starting
Nov 4 23:53:31.910683 extend-filesystems[1562]: Found /dev/vda9
Nov 4 23:53:31.909988 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 4 23:53:31.933549 extend-filesystems[1562]: Checking size of /dev/vda9
Nov 4 23:53:31.935980 coreos-metadata[1558]: Nov 04 23:53:31.934 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 4 23:53:31.949269 tar[1579]: linux-amd64/LICENSE
Nov 4 23:53:31.952815 tar[1579]: linux-amd64/helm
Nov 4 23:53:31.957620 dbus-daemon[1559]: [system] SELinux support is enabled
Nov 4 23:53:31.958607 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 4 23:53:31.963736 coreos-metadata[1558]: Nov 04 23:53:31.960 INFO Fetch successful
Nov 4 23:53:31.964733 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 4 23:53:31.964785 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 4 23:53:31.965606 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 4 23:53:31.965775 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Nov 4 23:53:31.965802 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 4 23:53:31.970986 jq[1593]: true
Nov 4 23:53:31.993201 extend-filesystems[1562]: Resized partition /dev/vda9
Nov 4 23:53:31.993197 systemd[1]: Started update-engine.service - Update Engine.
Nov 4 23:53:31.995695 update_engine[1571]: I20251104 23:53:31.995277 1571 update_check_scheduler.cc:74] Next update check in 7m51s
Nov 4 23:53:32.000534 systemd[1]: motdgen.service: Deactivated successfully.
Nov 4 23:53:32.005046 extend-filesystems[1609]: resize2fs 1.47.3 (8-Jul-2025)
Nov 4 23:53:32.000936 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 4 23:53:32.023329 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks
Nov 4 23:53:32.028693 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 4 23:53:32.082897 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 4 23:53:32.083813 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 4 23:53:32.158730 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 4 23:53:32.161691 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 4 23:53:32.170689 kernel: Console: switching to colour dummy device 80x25
Nov 4 23:53:32.172469 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 4 23:53:32.172548 kernel: [drm] features: -context_init
Nov 4 23:53:32.174883 kernel: [drm] number of scanouts: 1
Nov 4 23:53:32.174982 kernel: [drm] number of cap sets: 0
Nov 4 23:53:32.177696 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 4 23:53:32.188312 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 4 23:53:32.188423 kernel: Console: switching to colour frame buffer device 128x48
Nov 4 23:53:32.195646 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 4 23:53:32.218117 bash[1634]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 23:53:32.219440 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 4 23:53:32.231027 systemd[1]: Starting sshkeys.service...
Nov 4 23:53:32.254615 kernel: EXT4-fs (vda9): resized filesystem to 14138363
Nov 4 23:53:32.288136 extend-filesystems[1609]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 4 23:53:32.288136 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 7
Nov 4 23:53:32.288136 extend-filesystems[1609]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long.
Nov 4 23:53:32.289963 extend-filesystems[1562]: Resized filesystem in /dev/vda9
Nov 4 23:53:32.291492 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 4 23:53:32.291735 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 4 23:53:32.340126 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 4 23:53:32.346941 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 4 23:53:32.434821 containerd[1598]: time="2025-11-04T23:53:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 4 23:53:32.434821 containerd[1598]: time="2025-11-04T23:53:32.434233312Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 4 23:53:32.455238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:53:32.465283 containerd[1598]: time="2025-11-04T23:53:32.464032351Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.344µs"
Nov 4 23:53:32.465283 containerd[1598]: time="2025-11-04T23:53:32.464071458Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 4 23:53:32.465283 containerd[1598]: time="2025-11-04T23:53:32.464111092Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 4 23:53:32.465283 containerd[1598]: time="2025-11-04T23:53:32.464326974Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 4 23:53:32.465283 containerd[1598]: time="2025-11-04T23:53:32.464347254Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 4 23:53:32.465283 containerd[1598]: time="2025-11-04T23:53:32.464377547Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 23:53:32.465283 containerd[1598]: time="2025-11-04T23:53:32.464438233Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 23:53:32.465283 containerd[1598]: time="2025-11-04T23:53:32.464455899Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 23:53:32.468116 containerd[1598]: time="2025-11-04T23:53:32.466924152Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 23:53:32.468116 containerd[1598]: time="2025-11-04T23:53:32.467062652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 23:53:32.468116 containerd[1598]: time="2025-11-04T23:53:32.467096728Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 23:53:32.468116 containerd[1598]: time="2025-11-04T23:53:32.467222177Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 4 23:53:32.468116 containerd[1598]: time="2025-11-04T23:53:32.467510459Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 4 23:53:32.468116 containerd[1598]: time="2025-11-04T23:53:32.467886063Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 23:53:32.468116 containerd[1598]: time="2025-11-04T23:53:32.467931005Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 23:53:32.468116 containerd[1598]: time="2025-11-04T23:53:32.467946042Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 4 23:53:32.468116 containerd[1598]: time="2025-11-04T23:53:32.467982165Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 4 23:53:32.468418 containerd[1598]: time="2025-11-04T23:53:32.468302823Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 4 23:53:32.468418 containerd[1598]: time="2025-11-04T23:53:32.468381951Z" level=info msg="metadata content store policy set" policy=shared
Nov 4 23:53:32.471075 systemd-logind[1570]: New seat seat0.
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.473732148Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.473814897Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.473837184Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.473852809Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.473870575Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.473887861Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.473903680Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.473919523Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.473937007Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.473972994Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.473986459Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.474003381Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.474181197Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 4 23:53:32.474809 containerd[1598]: time="2025-11-04T23:53:32.474211355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474234508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474291028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474310706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474325418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474340563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474355171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474371119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474382656Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474395597Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474479236Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474498587Z" level=info msg="Start snapshots syncer"
Nov 4 23:53:32.475199 containerd[1598]: time="2025-11-04T23:53:32.474530571Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 23:53:32.481401 containerd[1598]: time="2025-11-04T23:53:32.476982481Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 4 23:53:32.481401 containerd[1598]: time="2025-11-04T23:53:32.477098343Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 4 23:53:32.481736 containerd[1598]: time="2025-11-04T23:53:32.480285506Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 23:53:32.481736 containerd[1598]: time="2025-11-04T23:53:32.480501533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 23:53:32.481736 containerd[1598]: time="2025-11-04T23:53:32.480556122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 23:53:32.481736 containerd[1598]: time="2025-11-04T23:53:32.480571797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 23:53:32.481736 containerd[1598]: time="2025-11-04T23:53:32.480583129Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 23:53:32.481736 containerd[1598]: time="2025-11-04T23:53:32.480644811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 23:53:32.482818 containerd[1598]: time="2025-11-04T23:53:32.482424472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 23:53:32.482818 containerd[1598]: time="2025-11-04T23:53:32.482464849Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 23:53:32.482818 containerd[1598]: time="2025-11-04T23:53:32.482497555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 23:53:32.482818 containerd[1598]: time="2025-11-04T23:53:32.482510792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 23:53:32.482818 containerd[1598]: time="2025-11-04T23:53:32.482523110Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 23:53:32.482818 containerd[1598]: time="2025-11-04T23:53:32.482646706Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:53:32.483010 containerd[1598]: time="2025-11-04T23:53:32.482947966Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:53:32.483010 containerd[1598]: time="2025-11-04T23:53:32.482965413Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:53:32.483010 containerd[1598]: time="2025-11-04T23:53:32.482981716Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:53:32.483010 containerd[1598]: time="2025-11-04T23:53:32.482989946Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 23:53:32.483010 containerd[1598]: time="2025-11-04T23:53:32.483002713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 23:53:32.483127 containerd[1598]: time="2025-11-04T23:53:32.483014107Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 23:53:32.483127 containerd[1598]: time="2025-11-04T23:53:32.483031703Z" level=info msg="runtime interface created" Nov 4 23:53:32.483127 containerd[1598]: time="2025-11-04T23:53:32.483036849Z" level=info msg="created NRI interface" Nov 4 23:53:32.483127 containerd[1598]: time="2025-11-04T23:53:32.483058374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 23:53:32.483127 containerd[1598]: time="2025-11-04T23:53:32.483078640Z" level=info msg="Connect containerd service" Nov 4 23:53:32.483226 containerd[1598]: time="2025-11-04T23:53:32.483163300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 23:53:32.485780 systemd[1]: Started 
systemd-logind.service - User Login Management. Nov 4 23:53:32.489046 containerd[1598]: time="2025-11-04T23:53:32.488213384Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:53:32.548792 coreos-metadata[1644]: Nov 04 23:53:32.548 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 4 23:53:32.566723 coreos-metadata[1644]: Nov 04 23:53:32.566 INFO Fetch successful Nov 4 23:53:32.604595 unknown[1644]: wrote ssh authorized keys file for user: core Nov 4 23:53:32.617844 systemd-networkd[1493]: eth1: Gained IPv6LL Nov 4 23:53:32.618701 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Nov 4 23:53:32.626099 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 23:53:32.630003 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 23:53:32.632789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:53:32.636307 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 23:53:32.669268 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 23:53:32.684503 update-ssh-keys[1661]: Updated "/home/core/.ssh/authorized_keys" Nov 4 23:53:32.681817 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 4 23:53:32.686909 systemd[1]: Finished sshkeys.service. Nov 4 23:53:32.754790 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 23:53:32.787892 systemd-logind[1570]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 23:53:32.792103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
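[Editor's note] The `failed to load cni during init` error above is containerd's CRI plugin reporting that `/etc/cni/net.d` contains no network config yet — normal on a node that has not joined a cluster, since the CNI add-on usually installs that file later. As a hedged illustration of what the loader expects (the network name, bridge name, and subnet below are assumptions, not taken from this log), a minimal CNI network list looks like:

```python
import json

# Minimal CNI network list of the shape containerd's CRI plugin looks for
# in /etc/cni/net.d. Names and the subnet are illustrative assumptions.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-bridge-net",
    "plugins": [
        {
            "type": "bridge",          # standard CNI bridge plugin
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
            },
        },
        # portmap enables hostPort support for pods
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

def validate(conf: dict) -> bool:
    """Check the minimum keys a .conflist file needs to be loadable."""
    return bool(conf.get("cniVersion")) and bool(conf.get("name")) and bool(conf.get("plugins"))

encoded = json.dumps(conflist, indent=2)
assert validate(json.loads(encoded))
```

On a kubeadm-style node this file is typically written by the chosen CNI add-on after cluster setup, which is why the error clears on its own once the node is bootstrapped.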
Nov 4 23:53:32.797224 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 23:53:32.854082 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:53:32.855002 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:53:32.859915 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:53:32.864391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:53:32.865354 containerd[1598]: time="2025-11-04T23:53:32.865315517Z" level=info msg="Start subscribing containerd event" Nov 4 23:53:32.865494 containerd[1598]: time="2025-11-04T23:53:32.865467710Z" level=info msg="Start recovering state" Nov 4 23:53:32.865708 containerd[1598]: time="2025-11-04T23:53:32.865588163Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 23:53:32.865708 containerd[1598]: time="2025-11-04T23:53:32.865632205Z" level=info msg="Start event monitor" Nov 4 23:53:32.865708 containerd[1598]: time="2025-11-04T23:53:32.865648685Z" level=info msg="Start cni network conf syncer for default" Nov 4 23:53:32.866112 containerd[1598]: time="2025-11-04T23:53:32.865657259Z" level=info msg="Start streaming server" Nov 4 23:53:32.866112 containerd[1598]: time="2025-11-04T23:53:32.865813311Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 23:53:32.866112 containerd[1598]: time="2025-11-04T23:53:32.865821985Z" level=info msg="runtime interface starting up..." Nov 4 23:53:32.866112 containerd[1598]: time="2025-11-04T23:53:32.865827980Z" level=info msg="starting plugins..." Nov 4 23:53:32.866112 containerd[1598]: time="2025-11-04T23:53:32.865841266Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 23:53:32.875831 containerd[1598]: time="2025-11-04T23:53:32.875755739Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 4 23:53:32.877688 containerd[1598]: time="2025-11-04T23:53:32.876819335Z" level=info msg="containerd successfully booted in 0.443554s" Nov 4 23:53:32.879600 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 23:53:32.937964 systemd-networkd[1493]: eth0: Gained IPv6LL Nov 4 23:53:32.946912 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Nov 4 23:53:33.102840 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:53:33.104041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:53:33.130928 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:53:33.272534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:53:33.320167 kernel: EDAC MC: Ver: 3.0.0 Nov 4 23:53:33.491827 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 23:53:33.514486 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 23:53:33.567362 tar[1579]: linux-amd64/README.md Nov 4 23:53:33.583357 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 23:53:33.593797 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 23:53:33.596499 systemd[1]: Started sshd@0-137.184.235.85:22-139.178.89.65:45822.service - OpenSSH per-connection server daemon (139.178.89.65:45822). Nov 4 23:53:33.599398 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 23:53:33.637156 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 23:53:33.637492 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 23:53:33.644178 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 23:53:33.676701 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 23:53:33.683120 systemd[1]: Started getty@tty1.service - Getty on tty1. 
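[Editor's note] The containerd entries above ("containerd successfully booted in 0.443554s", etc.) use logfmt-style `time=… level=… msg=…` fields. A small parser — a sketch whose regex handles only the bare and double-quoted values seen in this log, not full logfmt — can pull those fields out for analysis:

```python
import re

LINE = ('time="2025-11-04T23:53:32.876819335Z" level=info '
        'msg="containerd successfully booted in 0.443554s"')

# Match key=value pairs where the value is double-quoted (may contain
# spaces) or a bare token. Sufficient for the entries in this log.
PAIR = re.compile(r'(\w+)=("([^"]*)"|\S+)')

def parse(line: str) -> dict:
    """Extract logfmt-style fields from one containerd log entry."""
    fields = {}
    for key, raw, quoted in PAIR.findall(line):
        fields[key] = quoted if raw.startswith('"') else raw
    return fields

entry = parse(LINE)
assert entry["level"] == "info"
assert entry["msg"].endswith("0.443554s")
```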
Nov 4 23:53:33.689334 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 23:53:33.691766 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 23:53:33.760862 sshd[1717]: Accepted publickey for core from 139.178.89.65 port 45822 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:53:33.763693 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:33.776444 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 23:53:33.782774 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 23:53:33.797689 systemd-logind[1570]: New session 1 of user core. Nov 4 23:53:33.823945 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 23:53:33.832108 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 23:53:33.852561 (systemd)[1731]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 23:53:33.858429 systemd-logind[1570]: New session c1 of user core. Nov 4 23:53:34.026303 systemd[1731]: Queued start job for default target default.target. Nov 4 23:53:34.032559 systemd[1731]: Created slice app.slice - User Application Slice. Nov 4 23:53:34.032607 systemd[1731]: Reached target paths.target - Paths. Nov 4 23:53:34.032658 systemd[1731]: Reached target timers.target - Timers. Nov 4 23:53:34.037870 systemd[1731]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 23:53:34.060275 systemd[1731]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 23:53:34.060566 systemd[1731]: Reached target sockets.target - Sockets. Nov 4 23:53:34.060775 systemd[1731]: Reached target basic.target - Basic System. Nov 4 23:53:34.060898 systemd[1731]: Reached target default.target - Main User Target. Nov 4 23:53:34.060941 systemd[1731]: Startup finished in 190ms. 
Nov 4 23:53:34.061951 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 23:53:34.076030 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 23:53:34.151346 systemd[1]: Started sshd@1-137.184.235.85:22-139.178.89.65:45834.service - OpenSSH per-connection server daemon (139.178.89.65:45834). Nov 4 23:53:34.258439 sshd[1742]: Accepted publickey for core from 139.178.89.65 port 45834 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:53:34.260759 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:34.268826 systemd-logind[1570]: New session 2 of user core. Nov 4 23:53:34.282024 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 4 23:53:34.356128 sshd[1745]: Connection closed by 139.178.89.65 port 45834 Nov 4 23:53:34.359278 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:34.371767 systemd[1]: sshd@1-137.184.235.85:22-139.178.89.65:45834.service: Deactivated successfully. Nov 4 23:53:34.375096 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 23:53:34.378947 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Nov 4 23:53:34.384221 systemd[1]: Started sshd@2-137.184.235.85:22-139.178.89.65:45848.service - OpenSSH per-connection server daemon (139.178.89.65:45848). Nov 4 23:53:34.388739 systemd-logind[1570]: Removed session 2. Nov 4 23:53:34.469588 sshd[1752]: Accepted publickey for core from 139.178.89.65 port 45848 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:53:34.471482 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:34.485512 systemd-logind[1570]: New session 3 of user core. Nov 4 23:53:34.493024 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 23:53:34.496547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 23:53:34.501947 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 23:53:34.508060 systemd[1]: Startup finished in 2.519s (kernel) + 6.068s (initrd) + 6.770s (userspace) = 15.358s. Nov 4 23:53:34.513640 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:53:34.570738 sshd[1761]: Connection closed by 139.178.89.65 port 45848 Nov 4 23:53:34.571451 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:34.576699 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Nov 4 23:53:34.577487 systemd[1]: sshd@2-137.184.235.85:22-139.178.89.65:45848.service: Deactivated successfully. Nov 4 23:53:34.581158 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 23:53:34.585392 systemd-logind[1570]: Removed session 3. Nov 4 23:53:35.166203 kubelet[1759]: E1104 23:53:35.166067 1759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:53:35.168964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:53:35.169410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:53:35.170226 systemd[1]: kubelet.service: Consumed 1.294s CPU time, 257.8M memory peak. Nov 4 23:53:44.591798 systemd[1]: Started sshd@3-137.184.235.85:22-139.178.89.65:42586.service - OpenSSH per-connection server daemon (139.178.89.65:42586). 
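[Editor's note] The kubelet exit above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the expected state of a kubeadm-packaged node before `kubeadm init` or `kubeadm join` writes that file; systemd then keeps restarting the unit, as the later "Scheduled restart job" entries show. The path comes from the log; the helper below is an illustrative sketch of the same fail-fast check, not kubelet code:

```python
import os

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the log's error message

def load_kubelet_config(path: str = KUBELET_CONFIG) -> str:
    """Illustrative mirror of the kubelet's startup behavior: raise when the
    kubeadm-written config file does not exist yet, else return its text."""
    if not os.path.exists(path):
        raise FileNotFoundError(
            f"failed to load Kubelet config file {path}: no such file or directory")
    with open(path) as f:
        return f.read()
```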
Nov 4 23:53:44.690451 sshd[1777]: Accepted publickey for core from 139.178.89.65 port 42586 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:53:44.692212 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:44.698298 systemd-logind[1570]: New session 4 of user core. Nov 4 23:53:44.708997 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 23:53:44.774764 sshd[1780]: Connection closed by 139.178.89.65 port 42586 Nov 4 23:53:44.775614 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:44.788419 systemd[1]: sshd@3-137.184.235.85:22-139.178.89.65:42586.service: Deactivated successfully. Nov 4 23:53:44.790944 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 23:53:44.792455 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Nov 4 23:53:44.796570 systemd[1]: Started sshd@4-137.184.235.85:22-139.178.89.65:42600.service - OpenSSH per-connection server daemon (139.178.89.65:42600). Nov 4 23:53:44.799763 systemd-logind[1570]: Removed session 4. Nov 4 23:53:44.882752 sshd[1786]: Accepted publickey for core from 139.178.89.65 port 42600 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:53:44.884024 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:44.892298 systemd-logind[1570]: New session 5 of user core. Nov 4 23:53:44.902086 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 23:53:44.962549 sshd[1789]: Connection closed by 139.178.89.65 port 42600 Nov 4 23:53:44.962408 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:44.980950 systemd[1]: sshd@4-137.184.235.85:22-139.178.89.65:42600.service: Deactivated successfully. Nov 4 23:53:44.983165 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 23:53:44.984112 systemd-logind[1570]: Session 5 logged out. 
Waiting for processes to exit. Nov 4 23:53:44.987843 systemd[1]: Started sshd@5-137.184.235.85:22-139.178.89.65:42612.service - OpenSSH per-connection server daemon (139.178.89.65:42612). Nov 4 23:53:44.989257 systemd-logind[1570]: Removed session 5. Nov 4 23:53:45.066701 sshd[1795]: Accepted publickey for core from 139.178.89.65 port 42612 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:53:45.069055 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:45.074857 systemd-logind[1570]: New session 6 of user core. Nov 4 23:53:45.082998 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 23:53:45.146961 sshd[1798]: Connection closed by 139.178.89.65 port 42612 Nov 4 23:53:45.147775 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:45.159914 systemd[1]: sshd@5-137.184.235.85:22-139.178.89.65:42612.service: Deactivated successfully. Nov 4 23:53:45.163635 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 23:53:45.164921 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Nov 4 23:53:45.169761 systemd[1]: Started sshd@6-137.184.235.85:22-139.178.89.65:42614.service - OpenSSH per-connection server daemon (139.178.89.65:42614). Nov 4 23:53:45.171207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 23:53:45.172026 systemd-logind[1570]: Removed session 6. Nov 4 23:53:45.176004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:53:45.248176 sshd[1804]: Accepted publickey for core from 139.178.89.65 port 42614 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:53:45.250417 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:45.259455 systemd-logind[1570]: New session 7 of user core. Nov 4 23:53:45.271958 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 4 23:53:45.352063 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 23:53:45.353088 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:53:45.367095 sudo[1811]: pam_unix(sudo:session): session closed for user root Nov 4 23:53:45.372826 sshd[1810]: Connection closed by 139.178.89.65 port 42614 Nov 4 23:53:45.373622 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:45.384580 systemd[1]: sshd@6-137.184.235.85:22-139.178.89.65:42614.service: Deactivated successfully. Nov 4 23:53:45.388260 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 23:53:45.390938 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Nov 4 23:53:45.397175 systemd[1]: Started sshd@7-137.184.235.85:22-139.178.89.65:42620.service - OpenSSH per-connection server daemon (139.178.89.65:42620). Nov 4 23:53:45.400411 systemd-logind[1570]: Removed session 7. Nov 4 23:53:45.409976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:53:45.424459 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:53:45.471611 sshd[1821]: Accepted publickey for core from 139.178.89.65 port 42620 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:53:45.474910 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:45.488751 systemd-logind[1570]: New session 8 of user core. Nov 4 23:53:45.492960 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 4 23:53:45.513812 kubelet[1823]: E1104 23:53:45.513715 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:53:45.519590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:53:45.519821 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:53:45.520376 systemd[1]: kubelet.service: Consumed 246ms CPU time, 110.5M memory peak. Nov 4 23:53:45.562804 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 23:53:45.563302 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:53:45.570765 sudo[1834]: pam_unix(sudo:session): session closed for user root Nov 4 23:53:45.580063 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 23:53:45.580376 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:53:45.597369 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:53:45.659773 augenrules[1856]: No rules Nov 4 23:53:45.660527 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:53:45.661084 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:53:45.662726 sudo[1833]: pam_unix(sudo:session): session closed for user root Nov 4 23:53:45.667806 sshd[1831]: Connection closed by 139.178.89.65 port 42620 Nov 4 23:53:45.667881 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:45.690576 systemd[1]: sshd@7-137.184.235.85:22-139.178.89.65:42620.service: Deactivated successfully. 
Nov 4 23:53:45.693008 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 23:53:45.694071 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit. Nov 4 23:53:45.698050 systemd[1]: Started sshd@8-137.184.235.85:22-139.178.89.65:42622.service - OpenSSH per-connection server daemon (139.178.89.65:42622). Nov 4 23:53:45.700768 systemd-logind[1570]: Removed session 8. Nov 4 23:53:45.771482 sshd[1865]: Accepted publickey for core from 139.178.89.65 port 42622 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:53:45.773071 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:45.779412 systemd-logind[1570]: New session 9 of user core. Nov 4 23:53:45.788008 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 23:53:45.853154 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 23:53:45.853984 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:53:46.436231 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 23:53:46.451271 (dockerd)[1886]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 23:53:46.911360 dockerd[1886]: time="2025-11-04T23:53:46.910930369Z" level=info msg="Starting up" Nov 4 23:53:46.913470 dockerd[1886]: time="2025-11-04T23:53:46.913219044Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 23:53:46.934862 dockerd[1886]: time="2025-11-04T23:53:46.934805503Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 23:53:46.975943 dockerd[1886]: time="2025-11-04T23:53:46.975876233Z" level=info msg="Loading containers: start." 
Nov 4 23:53:46.991791 kernel: Initializing XFRM netlink socket Nov 4 23:53:47.255987 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Nov 4 23:53:47.317634 systemd-networkd[1493]: docker0: Link UP Nov 4 23:53:47.322099 dockerd[1886]: time="2025-11-04T23:53:47.321973493Z" level=info msg="Loading containers: done." Nov 4 23:53:47.331929 systemd-timesyncd[1460]: Contacted time server 50.117.3.95:123 (2.flatcar.pool.ntp.org). Nov 4 23:53:47.332911 systemd-timesyncd[1460]: Initial clock synchronization to Tue 2025-11-04 23:53:47.603421 UTC. Nov 4 23:53:47.345540 dockerd[1886]: time="2025-11-04T23:53:47.345121601Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 23:53:47.345540 dockerd[1886]: time="2025-11-04T23:53:47.345244884Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 23:53:47.345540 dockerd[1886]: time="2025-11-04T23:53:47.345361066Z" level=info msg="Initializing buildkit" Nov 4 23:53:47.369152 dockerd[1886]: time="2025-11-04T23:53:47.369097264Z" level=info msg="Completed buildkit initialization" Nov 4 23:53:47.376397 dockerd[1886]: time="2025-11-04T23:53:47.376325400Z" level=info msg="Daemon has completed initialization" Nov 4 23:53:47.376550 dockerd[1886]: time="2025-11-04T23:53:47.376433122Z" level=info msg="API listen on /run/docker.sock" Nov 4 23:53:47.376958 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 23:53:48.244197 containerd[1598]: time="2025-11-04T23:53:48.244153709Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 4 23:53:49.400915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531834660.mount: Deactivated successfully. 
Nov 4 23:53:50.635727 containerd[1598]: time="2025-11-04T23:53:50.634721251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:50.635727 containerd[1598]: time="2025-11-04T23:53:50.635586530Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 4 23:53:50.637048 containerd[1598]: time="2025-11-04T23:53:50.636999979Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:50.641083 containerd[1598]: time="2025-11-04T23:53:50.641017424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:50.641932 containerd[1598]: time="2025-11-04T23:53:50.641879179Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.397684163s" Nov 4 23:53:50.642046 containerd[1598]: time="2025-11-04T23:53:50.641940636Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 4 23:53:50.642775 containerd[1598]: time="2025-11-04T23:53:50.642708524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 4 23:53:52.713037 containerd[1598]: time="2025-11-04T23:53:52.711457843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:52.713037 containerd[1598]: time="2025-11-04T23:53:52.712598410Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 4 23:53:52.713037 containerd[1598]: time="2025-11-04T23:53:52.712960717Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:52.716263 containerd[1598]: time="2025-11-04T23:53:52.716204256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:52.717953 containerd[1598]: time="2025-11-04T23:53:52.717898344Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 2.07514876s" Nov 4 23:53:52.718175 containerd[1598]: time="2025-11-04T23:53:52.718151388Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 4 23:53:52.718978 containerd[1598]: time="2025-11-04T23:53:52.718932958Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 4 23:53:53.619038 systemd-resolved[1280]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Nov 4 23:53:54.416387 containerd[1598]: time="2025-11-04T23:53:54.416321365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:54.417562 containerd[1598]: time="2025-11-04T23:53:54.417378296Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 4 23:53:54.418395 containerd[1598]: time="2025-11-04T23:53:54.418353566Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:54.420841 containerd[1598]: time="2025-11-04T23:53:54.420808277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:54.422267 containerd[1598]: time="2025-11-04T23:53:54.422228830Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.702947936s" Nov 4 23:53:54.422441 containerd[1598]: time="2025-11-04T23:53:54.422423553Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 4 23:53:54.423429 containerd[1598]: time="2025-11-04T23:53:54.423330040Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 4 23:53:55.623136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 4 23:53:55.628793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 4 23:53:55.866199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:53:55.878232 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:53:55.983208 kubelet[2181]: E1104 23:53:55.983145 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:53:55.988169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:53:55.988492 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:53:55.989204 systemd[1]: kubelet.service: Consumed 267ms CPU time, 108M memory peak. Nov 4 23:53:56.375203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount470842770.mount: Deactivated successfully. Nov 4 23:53:56.681914 systemd-resolved[1280]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Nov 4 23:53:56.834645 containerd[1598]: time="2025-11-04T23:53:56.833228629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:56.834645 containerd[1598]: time="2025-11-04T23:53:56.834398384Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 4 23:53:56.834645 containerd[1598]: time="2025-11-04T23:53:56.834524677Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:56.836257 containerd[1598]: time="2025-11-04T23:53:56.836219942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:56.837027 containerd[1598]: time="2025-11-04T23:53:56.836980211Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.413390552s" Nov 4 23:53:56.837027 containerd[1598]: time="2025-11-04T23:53:56.837020018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 4 23:53:56.837753 containerd[1598]: time="2025-11-04T23:53:56.837715900Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 4 23:53:57.895359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857550002.mount: Deactivated successfully. 
Nov 4 23:53:58.877265 containerd[1598]: time="2025-11-04T23:53:58.877191099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:58.878429 containerd[1598]: time="2025-11-04T23:53:58.878214173Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 4 23:53:58.879074 containerd[1598]: time="2025-11-04T23:53:58.879043021Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:58.881859 containerd[1598]: time="2025-11-04T23:53:58.881822265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:58.883195 containerd[1598]: time="2025-11-04T23:53:58.883157335Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.045409451s" Nov 4 23:53:58.883339 containerd[1598]: time="2025-11-04T23:53:58.883321697Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 4 23:53:58.884254 containerd[1598]: time="2025-11-04T23:53:58.884223982Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 4 23:53:59.769623 systemd-resolved[1280]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Nov 4 23:53:59.882384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount872117829.mount: Deactivated successfully. Nov 4 23:53:59.886620 containerd[1598]: time="2025-11-04T23:53:59.885711687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:59.886620 containerd[1598]: time="2025-11-04T23:53:59.886505002Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 4 23:53:59.887269 containerd[1598]: time="2025-11-04T23:53:59.887236277Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:59.889107 containerd[1598]: time="2025-11-04T23:53:59.889066935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:59.889780 containerd[1598]: time="2025-11-04T23:53:59.889747210Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.005494883s" Nov 4 23:53:59.889780 containerd[1598]: time="2025-11-04T23:53:59.889780299Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 4 23:53:59.890316 containerd[1598]: time="2025-11-04T23:53:59.890293849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 4 23:54:03.487239 containerd[1598]: time="2025-11-04T23:54:03.486590621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:03.488967 containerd[1598]: time="2025-11-04T23:54:03.488721167Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 4 23:54:03.491709 containerd[1598]: time="2025-11-04T23:54:03.490932476Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:03.495256 containerd[1598]: time="2025-11-04T23:54:03.495190110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:03.497143 containerd[1598]: time="2025-11-04T23:54:03.497082067Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.606714111s" Nov 4 23:54:03.497417 containerd[1598]: time="2025-11-04T23:54:03.497379432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 4 23:54:06.152726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 4 23:54:06.154911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:06.373867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 23:54:06.386201 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:54:06.476997 kubelet[2321]: E1104 23:54:06.476042 2321 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:54:06.481538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:54:06.482447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:54:06.484854 systemd[1]: kubelet.service: Consumed 241ms CPU time, 108M memory peak. Nov 4 23:54:08.039455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:08.039633 systemd[1]: kubelet.service: Consumed 241ms CPU time, 108M memory peak. Nov 4 23:54:08.042819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:08.099280 systemd[1]: Reload requested from client PID 2335 ('systemctl') (unit session-9.scope)... Nov 4 23:54:08.099328 systemd[1]: Reloading... Nov 4 23:54:08.320756 zram_generator::config[2382]: No configuration found. Nov 4 23:54:08.620546 systemd[1]: Reloading finished in 520 ms. Nov 4 23:54:08.688493 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 23:54:08.688891 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 23:54:08.689677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:08.689865 systemd[1]: kubelet.service: Consumed 176ms CPU time, 98.3M memory peak. Nov 4 23:54:08.692385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:08.880577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 23:54:08.896225 (kubelet)[2433]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:54:08.964456 kubelet[2433]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:54:08.965289 kubelet[2433]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:54:08.965874 kubelet[2433]: I1104 23:54:08.965556 2433 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:54:10.166471 kubelet[2433]: I1104 23:54:10.166424 2433 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 23:54:10.167214 kubelet[2433]: I1104 23:54:10.167011 2433 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:54:10.169051 kubelet[2433]: I1104 23:54:10.168972 2433 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 23:54:10.170686 kubelet[2433]: I1104 23:54:10.169903 2433 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 23:54:10.170686 kubelet[2433]: I1104 23:54:10.170251 2433 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:54:10.180369 kubelet[2433]: I1104 23:54:10.180329 2433 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:54:10.187580 kubelet[2433]: E1104 23:54:10.186278 2433 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://137.184.235.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.235.85:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:54:10.196835 kubelet[2433]: I1104 23:54:10.196788 2433 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:54:10.206335 kubelet[2433]: I1104 23:54:10.206284 2433 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 23:54:10.207225 kubelet[2433]: I1104 23:54:10.207154 2433 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:54:10.208776 kubelet[2433]: I1104 23:54:10.207210 2433 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-n-b9f348caa0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:54:10.208776 kubelet[2433]: I1104 23:54:10.208775 2433 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 
23:54:10.208776 kubelet[2433]: I1104 23:54:10.208789 2433 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 23:54:10.209009 kubelet[2433]: I1104 23:54:10.208902 2433 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 23:54:10.211116 kubelet[2433]: I1104 23:54:10.211074 2433 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:54:10.211328 kubelet[2433]: I1104 23:54:10.211303 2433 kubelet.go:475] "Attempting to sync node with API server" Nov 4 23:54:10.211328 kubelet[2433]: I1104 23:54:10.211321 2433 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:54:10.211431 kubelet[2433]: I1104 23:54:10.211359 2433 kubelet.go:387] "Adding apiserver pod source" Nov 4 23:54:10.211431 kubelet[2433]: I1104 23:54:10.211389 2433 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:54:10.220505 kubelet[2433]: E1104 23:54:10.220212 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://137.184.235.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.235.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:54:10.220505 kubelet[2433]: E1104 23:54:10.220377 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://137.184.235.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-b9f348caa0&limit=500&resourceVersion=0\": dial tcp 137.184.235.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:54:10.223871 kubelet[2433]: I1104 23:54:10.223585 2433 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:54:10.227713 kubelet[2433]: I1104 23:54:10.226966 2433 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:54:10.227713 kubelet[2433]: I1104 23:54:10.227023 2433 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 23:54:10.227713 kubelet[2433]: W1104 23:54:10.227107 2433 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 23:54:10.231875 kubelet[2433]: I1104 23:54:10.231851 2433 server.go:1262] "Started kubelet" Nov 4 23:54:10.233421 kubelet[2433]: I1104 23:54:10.233358 2433 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:54:10.238426 kubelet[2433]: E1104 23:54:10.236890 2433 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.235.85:6443/api/v1/namespaces/default/events\": dial tcp 137.184.235.85:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-n-b9f348caa0.1874f2ec2393f463 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-n-b9f348caa0,UID:ci-4487.0.0-n-b9f348caa0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-n-b9f348caa0,},FirstTimestamp:2025-11-04 23:54:10.231809123 +0000 UTC m=+1.329410115,LastTimestamp:2025-11-04 23:54:10.231809123 +0000 UTC m=+1.329410115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-n-b9f348caa0,}" Nov 4 23:54:10.239010 kubelet[2433]: I1104 23:54:10.238981 2433 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:54:10.246701 kubelet[2433]: I1104 23:54:10.246610 2433 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 
23:54:10.246992 kubelet[2433]: E1104 23:54:10.246922 2433 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-b9f348caa0\" not found" Nov 4 23:54:10.247279 kubelet[2433]: I1104 23:54:10.247263 2433 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 23:54:10.247334 kubelet[2433]: I1104 23:54:10.247322 2433 reconciler.go:29] "Reconciler: start to sync state" Nov 4 23:54:10.248433 kubelet[2433]: E1104 23:54:10.247698 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://137.184.235.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.235.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:54:10.248433 kubelet[2433]: E1104 23:54:10.247812 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.235.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-b9f348caa0?timeout=10s\": dial tcp 137.184.235.85:6443: connect: connection refused" interval="200ms" Nov 4 23:54:10.248590 kubelet[2433]: I1104 23:54:10.248544 2433 server.go:310] "Adding debug handlers to kubelet server" Nov 4 23:54:10.251606 kubelet[2433]: I1104 23:54:10.251548 2433 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:54:10.251733 kubelet[2433]: I1104 23:54:10.251621 2433 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 23:54:10.251942 kubelet[2433]: I1104 23:54:10.251926 2433 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:54:10.255929 kubelet[2433]: I1104 23:54:10.255896 2433 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 
23:54:10.256597 kubelet[2433]: I1104 23:54:10.256554 2433 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:54:10.260856 kubelet[2433]: E1104 23:54:10.260804 2433 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:54:10.263703 kubelet[2433]: I1104 23:54:10.263495 2433 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:54:10.263703 kubelet[2433]: I1104 23:54:10.263516 2433 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:54:10.281525 kubelet[2433]: I1104 23:54:10.281484 2433 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 4 23:54:10.284963 kubelet[2433]: I1104 23:54:10.284917 2433 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 4 23:54:10.284963 kubelet[2433]: I1104 23:54:10.284948 2433 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 23:54:10.285263 kubelet[2433]: I1104 23:54:10.284985 2433 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 23:54:10.285263 kubelet[2433]: E1104 23:54:10.285037 2433 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:54:10.289776 kubelet[2433]: E1104 23:54:10.289084 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://137.184.235.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.235.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:54:10.289776 kubelet[2433]: I1104 23:54:10.289630 2433 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:54:10.289776 kubelet[2433]: I1104 23:54:10.289642 2433 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:54:10.289776 kubelet[2433]: I1104 23:54:10.289697 2433 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:54:10.291399 kubelet[2433]: I1104 23:54:10.291376 2433 policy_none.go:49] "None policy: Start" Nov 4 23:54:10.291399 kubelet[2433]: I1104 23:54:10.291396 2433 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 23:54:10.291399 kubelet[2433]: I1104 23:54:10.291408 2433 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 23:54:10.293353 kubelet[2433]: I1104 23:54:10.293330 2433 policy_none.go:47] "Start" Nov 4 23:54:10.298799 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 23:54:10.314287 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 4 23:54:10.318364 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 4 23:54:10.337367 kubelet[2433]: E1104 23:54:10.337333 2433 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:54:10.337785 kubelet[2433]: I1104 23:54:10.337767 2433 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:54:10.338033 kubelet[2433]: I1104 23:54:10.337994 2433 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:54:10.340689 kubelet[2433]: I1104 23:54:10.340027 2433 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:54:10.340942 kubelet[2433]: E1104 23:54:10.340723 2433 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 23:54:10.341696 kubelet[2433]: E1104 23:54:10.341444 2433 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.0-n-b9f348caa0\" not found" Nov 4 23:54:10.400981 systemd[1]: Created slice kubepods-burstable-pod65b366eeafc9f8eaafbbf7578ec51d5f.slice - libcontainer container kubepods-burstable-pod65b366eeafc9f8eaafbbf7578ec51d5f.slice. Nov 4 23:54:10.419037 kubelet[2433]: E1104 23:54:10.418927 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-b9f348caa0\" not found" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.425070 systemd[1]: Created slice kubepods-burstable-podf2d41a415c1a05e6d9271292502cea17.slice - libcontainer container kubepods-burstable-podf2d41a415c1a05e6d9271292502cea17.slice. 
Nov 4 23:54:10.428438 kubelet[2433]: E1104 23:54:10.428404 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-b9f348caa0\" not found" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.437441 systemd[1]: Created slice kubepods-burstable-pod15984fd1b414c2e0cc0cc47faca708a8.slice - libcontainer container kubepods-burstable-pod15984fd1b414c2e0cc0cc47faca708a8.slice. Nov 4 23:54:10.440556 kubelet[2433]: I1104 23:54:10.440204 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.440556 kubelet[2433]: E1104 23:54:10.440313 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-b9f348caa0\" not found" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.440800 kubelet[2433]: E1104 23:54:10.440655 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.235.85:6443/api/v1/nodes\": dial tcp 137.184.235.85:6443: connect: connection refused" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.448190 kubelet[2433]: I1104 23:54:10.448135 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65b366eeafc9f8eaafbbf7578ec51d5f-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-b9f348caa0\" (UID: \"65b366eeafc9f8eaafbbf7578ec51d5f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.448453 kubelet[2433]: I1104 23:54:10.448421 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2d41a415c1a05e6d9271292502cea17-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-b9f348caa0\" (UID: \"f2d41a415c1a05e6d9271292502cea17\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.448453 kubelet[2433]: I1104 
23:54:10.448452 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f2d41a415c1a05e6d9271292502cea17-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-b9f348caa0\" (UID: \"f2d41a415c1a05e6d9271292502cea17\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.448571 kubelet[2433]: I1104 23:54:10.448475 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2d41a415c1a05e6d9271292502cea17-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-b9f348caa0\" (UID: \"f2d41a415c1a05e6d9271292502cea17\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.448571 kubelet[2433]: I1104 23:54:10.448509 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2d41a415c1a05e6d9271292502cea17-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-n-b9f348caa0\" (UID: \"f2d41a415c1a05e6d9271292502cea17\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.448571 kubelet[2433]: I1104 23:54:10.448536 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15984fd1b414c2e0cc0cc47faca708a8-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-b9f348caa0\" (UID: \"15984fd1b414c2e0cc0cc47faca708a8\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.448571 kubelet[2433]: I1104 23:54:10.448559 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65b366eeafc9f8eaafbbf7578ec51d5f-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-b9f348caa0\" 
(UID: \"65b366eeafc9f8eaafbbf7578ec51d5f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.448691 kubelet[2433]: I1104 23:54:10.448579 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65b366eeafc9f8eaafbbf7578ec51d5f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-b9f348caa0\" (UID: \"65b366eeafc9f8eaafbbf7578ec51d5f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.448691 kubelet[2433]: I1104 23:54:10.448594 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f2d41a415c1a05e6d9271292502cea17-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-b9f348caa0\" (UID: \"f2d41a415c1a05e6d9271292502cea17\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.448691 kubelet[2433]: E1104 23:54:10.448346 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.235.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-b9f348caa0?timeout=10s\": dial tcp 137.184.235.85:6443: connect: connection refused" interval="400ms" Nov 4 23:54:10.642190 kubelet[2433]: I1104 23:54:10.642140 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.642631 kubelet[2433]: E1104 23:54:10.642600 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.235.85:6443/api/v1/nodes\": dial tcp 137.184.235.85:6443: connect: connection refused" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:10.724773 kubelet[2433]: E1104 23:54:10.723956 2433 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Nov 4 23:54:10.726477 containerd[1598]: time="2025-11-04T23:54:10.726425131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-b9f348caa0,Uid:65b366eeafc9f8eaafbbf7578ec51d5f,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:10.731412 kubelet[2433]: E1104 23:54:10.731071 2433 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:10.744193 kubelet[2433]: E1104 23:54:10.744152 2433 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:10.750433 containerd[1598]: time="2025-11-04T23:54:10.750219245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-b9f348caa0,Uid:f2d41a415c1a05e6d9271292502cea17,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:10.751548 containerd[1598]: time="2025-11-04T23:54:10.751339259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-b9f348caa0,Uid:15984fd1b414c2e0cc0cc47faca708a8,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:10.850195 kubelet[2433]: E1104 23:54:10.850141 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.235.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-b9f348caa0?timeout=10s\": dial tcp 137.184.235.85:6443: connect: connection refused" interval="800ms" Nov 4 23:54:11.045802 kubelet[2433]: I1104 23:54:11.045569 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:11.046170 kubelet[2433]: E1104 23:54:11.045981 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.235.85:6443/api/v1/nodes\": dial tcp 137.184.235.85:6443: connect: 
connection refused" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:11.223607 kubelet[2433]: E1104 23:54:11.223529 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://137.184.235.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.235.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:54:11.224249 kubelet[2433]: E1104 23:54:11.223980 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://137.184.235.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-b9f348caa0&limit=500&resourceVersion=0\": dial tcp 137.184.235.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:54:11.320960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount53939290.mount: Deactivated successfully. Nov 4 23:54:11.324808 containerd[1598]: time="2025-11-04T23:54:11.324760548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:11.326280 containerd[1598]: time="2025-11-04T23:54:11.326216965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 4 23:54:11.327107 containerd[1598]: time="2025-11-04T23:54:11.327066838Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:11.329615 containerd[1598]: time="2025-11-04T23:54:11.329061398Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Nov 4 23:54:11.330760 containerd[1598]: time="2025-11-04T23:54:11.330732080Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 23:54:11.331780 containerd[1598]: time="2025-11-04T23:54:11.331752023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:11.332890 containerd[1598]: time="2025-11-04T23:54:11.332866459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 575.145061ms" Nov 4 23:54:11.334812 containerd[1598]: time="2025-11-04T23:54:11.334782330Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:11.334883 containerd[1598]: time="2025-11-04T23:54:11.334823441Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 23:54:11.341517 containerd[1598]: time="2025-11-04T23:54:11.341469764Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 582.825459ms" Nov 4 23:54:11.346163 containerd[1598]: time="2025-11-04T23:54:11.346115596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 588.394542ms" Nov 4 23:54:11.455221 containerd[1598]: time="2025-11-04T23:54:11.455138670Z" level=info msg="connecting to shim db7ed858d0ea71fe6361affb4100e7346fbd930f19981dc4ca8c9d3657441e7e" address="unix:///run/containerd/s/a8d84c0ab827aa2d2f25a02af1872df2cb15f34ec197e50a3652dbf703bdef15" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:11.456845 containerd[1598]: time="2025-11-04T23:54:11.456650362Z" level=info msg="connecting to shim 13ec86b64e8bfd09da13f3c5c79af6931861b3e57e4fa85b7957d1b398dab88f" address="unix:///run/containerd/s/62cca7a0fdbb10f60a6abfde519410840232bce14a7a9af0847fc66347920b6f" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:11.462315 containerd[1598]: time="2025-11-04T23:54:11.462239758Z" level=info msg="connecting to shim a0ef792f3a7067dd26f211fdcdfd272f260b6e746d16ae999a3dda13be6f2615" address="unix:///run/containerd/s/bf534eb163fa2f543d652524ff9f8f5c9ba40a917ed9f63d6bf43250d27d4d2f" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:11.475811 kubelet[2433]: E1104 23:54:11.475770 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://137.184.235.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.235.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:54:11.566925 systemd[1]: Started cri-containerd-13ec86b64e8bfd09da13f3c5c79af6931861b3e57e4fa85b7957d1b398dab88f.scope - libcontainer container 13ec86b64e8bfd09da13f3c5c79af6931861b3e57e4fa85b7957d1b398dab88f. 
Nov 4 23:54:11.579881 systemd[1]: Started cri-containerd-a0ef792f3a7067dd26f211fdcdfd272f260b6e746d16ae999a3dda13be6f2615.scope - libcontainer container a0ef792f3a7067dd26f211fdcdfd272f260b6e746d16ae999a3dda13be6f2615. Nov 4 23:54:11.581208 systemd[1]: Started cri-containerd-db7ed858d0ea71fe6361affb4100e7346fbd930f19981dc4ca8c9d3657441e7e.scope - libcontainer container db7ed858d0ea71fe6361affb4100e7346fbd930f19981dc4ca8c9d3657441e7e. Nov 4 23:54:11.651737 kubelet[2433]: E1104 23:54:11.650886 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.235.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-b9f348caa0?timeout=10s\": dial tcp 137.184.235.85:6443: connect: connection refused" interval="1.6s" Nov 4 23:54:11.677008 kubelet[2433]: E1104 23:54:11.676650 2433 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://137.184.235.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.235.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:54:11.680608 containerd[1598]: time="2025-11-04T23:54:11.680391509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-b9f348caa0,Uid:f2d41a415c1a05e6d9271292502cea17,Namespace:kube-system,Attempt:0,} returns sandbox id \"13ec86b64e8bfd09da13f3c5c79af6931861b3e57e4fa85b7957d1b398dab88f\"" Nov 4 23:54:11.681911 containerd[1598]: time="2025-11-04T23:54:11.681845220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-b9f348caa0,Uid:15984fd1b414c2e0cc0cc47faca708a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"db7ed858d0ea71fe6361affb4100e7346fbd930f19981dc4ca8c9d3657441e7e\"" Nov 4 23:54:11.682293 kubelet[2433]: E1104 23:54:11.682259 2433 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:11.683082 kubelet[2433]: E1104 23:54:11.682749 2433 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:11.688117 containerd[1598]: time="2025-11-04T23:54:11.688080562Z" level=info msg="CreateContainer within sandbox \"db7ed858d0ea71fe6361affb4100e7346fbd930f19981dc4ca8c9d3657441e7e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 23:54:11.691613 containerd[1598]: time="2025-11-04T23:54:11.690687173Z" level=info msg="CreateContainer within sandbox \"13ec86b64e8bfd09da13f3c5c79af6931861b3e57e4fa85b7957d1b398dab88f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 23:54:11.700044 containerd[1598]: time="2025-11-04T23:54:11.699990414Z" level=info msg="Container 7e87183078252e1c6e2a45c6c9666e20b0df9443fe948bf2b1ee36715776109e: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:11.702528 containerd[1598]: time="2025-11-04T23:54:11.702331696Z" level=info msg="Container 1d588c767508c27ecf3734a62aab12966c108535a7f41d912a530f78c758f27c: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:11.703699 containerd[1598]: time="2025-11-04T23:54:11.703634036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-b9f348caa0,Uid:65b366eeafc9f8eaafbbf7578ec51d5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0ef792f3a7067dd26f211fdcdfd272f260b6e746d16ae999a3dda13be6f2615\"" Nov 4 23:54:11.704749 kubelet[2433]: E1104 23:54:11.704679 2433 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:11.708688 containerd[1598]: time="2025-11-04T23:54:11.708627452Z" 
level=info msg="CreateContainer within sandbox \"a0ef792f3a7067dd26f211fdcdfd272f260b6e746d16ae999a3dda13be6f2615\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 23:54:11.715731 containerd[1598]: time="2025-11-04T23:54:11.715682457Z" level=info msg="CreateContainer within sandbox \"db7ed858d0ea71fe6361affb4100e7346fbd930f19981dc4ca8c9d3657441e7e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1d588c767508c27ecf3734a62aab12966c108535a7f41d912a530f78c758f27c\"" Nov 4 23:54:11.716859 containerd[1598]: time="2025-11-04T23:54:11.716820182Z" level=info msg="StartContainer for \"1d588c767508c27ecf3734a62aab12966c108535a7f41d912a530f78c758f27c\"" Nov 4 23:54:11.718147 containerd[1598]: time="2025-11-04T23:54:11.718075747Z" level=info msg="connecting to shim 1d588c767508c27ecf3734a62aab12966c108535a7f41d912a530f78c758f27c" address="unix:///run/containerd/s/a8d84c0ab827aa2d2f25a02af1872df2cb15f34ec197e50a3652dbf703bdef15" protocol=ttrpc version=3 Nov 4 23:54:11.719692 containerd[1598]: time="2025-11-04T23:54:11.719004370Z" level=info msg="Container 4d8f463b6ca71feec00eb9f9ebc3a665b3b498927d8d2d9ff10143de1c3de9dc: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:11.722811 containerd[1598]: time="2025-11-04T23:54:11.722389553Z" level=info msg="CreateContainer within sandbox \"13ec86b64e8bfd09da13f3c5c79af6931861b3e57e4fa85b7957d1b398dab88f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7e87183078252e1c6e2a45c6c9666e20b0df9443fe948bf2b1ee36715776109e\"" Nov 4 23:54:11.724380 containerd[1598]: time="2025-11-04T23:54:11.724344914Z" level=info msg="StartContainer for \"7e87183078252e1c6e2a45c6c9666e20b0df9443fe948bf2b1ee36715776109e\"" Nov 4 23:54:11.725407 containerd[1598]: time="2025-11-04T23:54:11.725379805Z" level=info msg="connecting to shim 7e87183078252e1c6e2a45c6c9666e20b0df9443fe948bf2b1ee36715776109e" 
address="unix:///run/containerd/s/62cca7a0fdbb10f60a6abfde519410840232bce14a7a9af0847fc66347920b6f" protocol=ttrpc version=3 Nov 4 23:54:11.732065 containerd[1598]: time="2025-11-04T23:54:11.732013001Z" level=info msg="CreateContainer within sandbox \"a0ef792f3a7067dd26f211fdcdfd272f260b6e746d16ae999a3dda13be6f2615\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d8f463b6ca71feec00eb9f9ebc3a665b3b498927d8d2d9ff10143de1c3de9dc\"" Nov 4 23:54:11.733588 containerd[1598]: time="2025-11-04T23:54:11.733482768Z" level=info msg="StartContainer for \"4d8f463b6ca71feec00eb9f9ebc3a665b3b498927d8d2d9ff10143de1c3de9dc\"" Nov 4 23:54:11.734647 containerd[1598]: time="2025-11-04T23:54:11.734606253Z" level=info msg="connecting to shim 4d8f463b6ca71feec00eb9f9ebc3a665b3b498927d8d2d9ff10143de1c3de9dc" address="unix:///run/containerd/s/bf534eb163fa2f543d652524ff9f8f5c9ba40a917ed9f63d6bf43250d27d4d2f" protocol=ttrpc version=3 Nov 4 23:54:11.757990 systemd[1]: Started cri-containerd-7e87183078252e1c6e2a45c6c9666e20b0df9443fe948bf2b1ee36715776109e.scope - libcontainer container 7e87183078252e1c6e2a45c6c9666e20b0df9443fe948bf2b1ee36715776109e. Nov 4 23:54:11.763235 systemd[1]: Started cri-containerd-1d588c767508c27ecf3734a62aab12966c108535a7f41d912a530f78c758f27c.scope - libcontainer container 1d588c767508c27ecf3734a62aab12966c108535a7f41d912a530f78c758f27c. Nov 4 23:54:11.782876 systemd[1]: Started cri-containerd-4d8f463b6ca71feec00eb9f9ebc3a665b3b498927d8d2d9ff10143de1c3de9dc.scope - libcontainer container 4d8f463b6ca71feec00eb9f9ebc3a665b3b498927d8d2d9ff10143de1c3de9dc. 
Nov 4 23:54:11.847894 kubelet[2433]: I1104 23:54:11.847586 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:11.849711 kubelet[2433]: E1104 23:54:11.849612 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.235.85:6443/api/v1/nodes\": dial tcp 137.184.235.85:6443: connect: connection refused" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:11.890319 containerd[1598]: time="2025-11-04T23:54:11.890187727Z" level=info msg="StartContainer for \"7e87183078252e1c6e2a45c6c9666e20b0df9443fe948bf2b1ee36715776109e\" returns successfully" Nov 4 23:54:11.898595 containerd[1598]: time="2025-11-04T23:54:11.898546812Z" level=info msg="StartContainer for \"4d8f463b6ca71feec00eb9f9ebc3a665b3b498927d8d2d9ff10143de1c3de9dc\" returns successfully" Nov 4 23:54:11.908557 containerd[1598]: time="2025-11-04T23:54:11.908343236Z" level=info msg="StartContainer for \"1d588c767508c27ecf3734a62aab12966c108535a7f41d912a530f78c758f27c\" returns successfully" Nov 4 23:54:12.301606 kubelet[2433]: E1104 23:54:12.301498 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-b9f348caa0\" not found" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:12.303394 kubelet[2433]: E1104 23:54:12.302435 2433 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:12.305124 kubelet[2433]: E1104 23:54:12.305091 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-b9f348caa0\" not found" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:12.305457 kubelet[2433]: E1104 23:54:12.305438 2433 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:12.308690 kubelet[2433]: E1104 23:54:12.307153 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-b9f348caa0\" not found" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:12.309048 kubelet[2433]: E1104 23:54:12.309031 2433 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:13.311689 kubelet[2433]: E1104 23:54:13.311451 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-b9f348caa0\" not found" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:13.313357 kubelet[2433]: E1104 23:54:13.312718 2433 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:13.313914 kubelet[2433]: E1104 23:54:13.313889 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-b9f348caa0\" not found" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:13.314127 kubelet[2433]: E1104 23:54:13.314099 2433 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:13.454507 kubelet[2433]: I1104 23:54:13.454022 2433 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:13.897300 kubelet[2433]: E1104 23:54:13.897205 2433 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-b9f348caa0\" not found" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:13.897919 kubelet[2433]: E1104 23:54:13.897840 2433 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:15.153161 kubelet[2433]: E1104 23:54:15.153123 2433 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.0-n-b9f348caa0\" not found" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:15.215330 kubelet[2433]: I1104 23:54:15.215292 2433 apiserver.go:52] "Watching apiserver" Nov 4 23:54:15.227223 kubelet[2433]: E1104 23:54:15.227086 2433 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4487.0.0-n-b9f348caa0.1874f2ec2393f463 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-n-b9f348caa0,UID:ci-4487.0.0-n-b9f348caa0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-n-b9f348caa0,},FirstTimestamp:2025-11-04 23:54:10.231809123 +0000 UTC m=+1.329410115,LastTimestamp:2025-11-04 23:54:10.231809123 +0000 UTC m=+1.329410115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-n-b9f348caa0,}" Nov 4 23:54:15.248160 kubelet[2433]: I1104 23:54:15.248073 2433 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 4 23:54:15.272844 kubelet[2433]: I1104 23:54:15.272804 2433 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:15.333508 kubelet[2433]: E1104 23:54:15.333369 2433 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4487.0.0-n-b9f348caa0.1874f2ec254dfa27 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-n-b9f348caa0,UID:ci-4487.0.0-n-b9f348caa0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-n-b9f348caa0,},FirstTimestamp:2025-11-04 23:54:10.260777511 +0000 UTC m=+1.358378507,LastTimestamp:2025-11-04 23:54:10.260777511 +0000 UTC m=+1.358378507,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-n-b9f348caa0,}" Nov 4 23:54:15.347485 kubelet[2433]: I1104 23:54:15.347421 2433 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:15.381691 kubelet[2433]: E1104 23:54:15.381631 2433 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-n-b9f348caa0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:15.381691 kubelet[2433]: I1104 23:54:15.381681 2433 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:15.390242 kubelet[2433]: E1104 23:54:15.390008 2433 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-n-b9f348caa0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:15.390242 kubelet[2433]: I1104 23:54:15.390242 2433 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:15.396025 kubelet[2433]: E1104 23:54:15.395969 2433 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.0-n-b9f348caa0\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4487.0.0-n-b9f348caa0" Nov 4 23:54:17.153954 update_engine[1571]: I20251104 23:54:17.153824 1571 update_attempter.cc:509] Updating boot flags... Nov 4 23:54:17.456388 systemd[1]: Reload requested from client PID 2733 ('systemctl') (unit session-9.scope)... Nov 4 23:54:17.456423 systemd[1]: Reloading... Nov 4 23:54:17.696730 zram_generator::config[2779]: No configuration found. Nov 4 23:54:18.085027 systemd[1]: Reloading finished in 627 ms. Nov 4 23:54:18.168505 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:18.193982 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 23:54:18.194886 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:18.195314 systemd[1]: kubelet.service: Consumed 1.855s CPU time, 123.2M memory peak. Nov 4 23:54:18.200801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:18.393368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:18.406579 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:54:18.492713 kubelet[2830]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:54:18.494689 kubelet[2830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 23:54:18.494689 kubelet[2830]: I1104 23:54:18.493171 2830 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:54:18.501221 kubelet[2830]: I1104 23:54:18.501180 2830 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 23:54:18.501461 kubelet[2830]: I1104 23:54:18.501427 2830 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:54:18.504001 kubelet[2830]: I1104 23:54:18.503959 2830 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 23:54:18.504225 kubelet[2830]: I1104 23:54:18.504200 2830 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 23:54:18.504821 kubelet[2830]: I1104 23:54:18.504780 2830 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:54:18.508298 kubelet[2830]: I1104 23:54:18.508265 2830 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 23:54:18.514092 kubelet[2830]: I1104 23:54:18.514053 2830 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:54:18.523871 kubelet[2830]: I1104 23:54:18.523845 2830 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:54:18.529993 kubelet[2830]: I1104 23:54:18.529959 2830 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 23:54:18.530479 kubelet[2830]: I1104 23:54:18.530426 2830 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:54:18.530796 kubelet[2830]: I1104 23:54:18.530573 2830 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-n-b9f348caa0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:54:18.530972 kubelet[2830]: I1104 23:54:18.530959 2830 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 
23:54:18.531034 kubelet[2830]: I1104 23:54:18.531024 2830 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 23:54:18.531169 kubelet[2830]: I1104 23:54:18.531155 2830 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 23:54:18.533467 kubelet[2830]: I1104 23:54:18.533432 2830 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:54:18.533921 kubelet[2830]: I1104 23:54:18.533901 2830 kubelet.go:475] "Attempting to sync node with API server" Nov 4 23:54:18.534203 kubelet[2830]: I1104 23:54:18.534031 2830 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:54:18.534203 kubelet[2830]: I1104 23:54:18.534064 2830 kubelet.go:387] "Adding apiserver pod source" Nov 4 23:54:18.534203 kubelet[2830]: I1104 23:54:18.534090 2830 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:54:18.541543 kubelet[2830]: I1104 23:54:18.538980 2830 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:54:18.541543 kubelet[2830]: I1104 23:54:18.539756 2830 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:54:18.541543 kubelet[2830]: I1104 23:54:18.539805 2830 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 23:54:18.545180 kubelet[2830]: I1104 23:54:18.545153 2830 server.go:1262] "Started kubelet" Nov 4 23:54:18.547433 kubelet[2830]: I1104 23:54:18.547398 2830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:54:18.560482 kubelet[2830]: I1104 23:54:18.560431 2830 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:54:18.561791 kubelet[2830]: I1104 23:54:18.561761 2830 server.go:310] "Adding debug handlers to kubelet server" 
Nov 4 23:54:18.585795 kubelet[2830]: I1104 23:54:18.562124 2830 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 23:54:18.589250 kubelet[2830]: I1104 23:54:18.589198 2830 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 4 23:54:18.590863 kubelet[2830]: I1104 23:54:18.590237 2830 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 23:54:18.591105 kubelet[2830]: I1104 23:54:18.569506 2830 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 4 23:54:18.592000 kubelet[2830]: E1104 23:54:18.569761 2830 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-b9f348caa0\" not found"
Nov 4 23:54:18.593339 kubelet[2830]: I1104 23:54:18.567753 2830 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 23:54:18.594194 kubelet[2830]: I1104 23:54:18.569494 2830 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 4 23:54:18.595489 kubelet[2830]: I1104 23:54:18.595461 2830 factory.go:223] Registration of the systemd container factory successfully
Nov 4 23:54:18.595772 kubelet[2830]: I1104 23:54:18.595742 2830 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 23:54:18.597127 kubelet[2830]: I1104 23:54:18.596589 2830 reconciler.go:29] "Reconciler: start to sync state"
Nov 4 23:54:18.600725 kubelet[2830]: I1104 23:54:18.600702 2830 factory.go:223] Registration of the containerd container factory successfully
Nov 4 23:54:18.629450 kubelet[2830]: I1104 23:54:18.629394 2830 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 4 23:54:18.631115 kubelet[2830]: I1104 23:54:18.631084 2830 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 4 23:54:18.631452 kubelet[2830]: I1104 23:54:18.631439 2830 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 4 23:54:18.631568 kubelet[2830]: I1104 23:54:18.631559 2830 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 4 23:54:18.631762 kubelet[2830]: E1104 23:54:18.631658 2830 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 23:54:18.638511 kubelet[2830]: E1104 23:54:18.637910 2830 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 4 23:54:18.732617 kubelet[2830]: E1104 23:54:18.732123 2830 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 4 23:54:18.747610 kubelet[2830]: I1104 23:54:18.746269 2830 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 23:54:18.747610 kubelet[2830]: I1104 23:54:18.746296 2830 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 23:54:18.747610 kubelet[2830]: I1104 23:54:18.746351 2830 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:54:18.747610 kubelet[2830]: I1104 23:54:18.746625 2830 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 4 23:54:18.747610 kubelet[2830]: I1104 23:54:18.746644 2830 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 4 23:54:18.747610 kubelet[2830]: I1104 23:54:18.746709 2830 policy_none.go:49] "None policy: Start"
Nov 4 23:54:18.747610 kubelet[2830]: I1104 23:54:18.746725 2830 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 4 23:54:18.747610 kubelet[2830]: I1104 23:54:18.746763 2830 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 4 23:54:18.747610 kubelet[2830]: I1104 23:54:18.746977 2830 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Nov 4 23:54:18.747610 kubelet[2830]: I1104 23:54:18.747047 2830 policy_none.go:47] "Start"
Nov 4 23:54:18.758057 kubelet[2830]: E1104 23:54:18.757870 2830 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 4 23:54:18.760444 kubelet[2830]: I1104 23:54:18.760118 2830 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 23:54:18.760444 kubelet[2830]: I1104 23:54:18.760145 2830 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 23:54:18.765581 kubelet[2830]: I1104 23:54:18.764889 2830 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 23:54:18.777062 kubelet[2830]: E1104 23:54:18.777011 2830 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 23:54:18.885855 kubelet[2830]: I1104 23:54:18.885791 2830 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:18.907840 kubelet[2830]: I1104 23:54:18.907491 2830 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:18.907840 kubelet[2830]: I1104 23:54:18.907617 2830 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:18.938641 kubelet[2830]: I1104 23:54:18.938126 2830 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:18.940320 kubelet[2830]: I1104 23:54:18.939868 2830 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:18.951607 kubelet[2830]: I1104 23:54:18.948575 2830 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:18.957929 kubelet[2830]: I1104 23:54:18.957517 2830 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 4 23:54:18.960887 kubelet[2830]: I1104 23:54:18.960683 2830 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 4 23:54:18.967170 kubelet[2830]: I1104 23:54:18.966611 2830 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 4 23:54:19.014743 kubelet[2830]: I1104 23:54:19.014380 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65b366eeafc9f8eaafbbf7578ec51d5f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-b9f348caa0\" (UID: \"65b366eeafc9f8eaafbbf7578ec51d5f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:19.014743 kubelet[2830]: I1104 23:54:19.014458 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2d41a415c1a05e6d9271292502cea17-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-b9f348caa0\" (UID: \"f2d41a415c1a05e6d9271292502cea17\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:19.014743 kubelet[2830]: I1104 23:54:19.014488 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f2d41a415c1a05e6d9271292502cea17-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-b9f348caa0\" (UID: \"f2d41a415c1a05e6d9271292502cea17\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:19.014743 kubelet[2830]: I1104 23:54:19.014536 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2d41a415c1a05e6d9271292502cea17-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-b9f348caa0\" (UID: \"f2d41a415c1a05e6d9271292502cea17\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:19.014743 kubelet[2830]: I1104 23:54:19.014562 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f2d41a415c1a05e6d9271292502cea17-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-b9f348caa0\" (UID: \"f2d41a415c1a05e6d9271292502cea17\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:19.015058 kubelet[2830]: I1104 23:54:19.014586 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2d41a415c1a05e6d9271292502cea17-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-n-b9f348caa0\" (UID: \"f2d41a415c1a05e6d9271292502cea17\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:19.015058 kubelet[2830]: I1104 23:54:19.014608 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15984fd1b414c2e0cc0cc47faca708a8-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-b9f348caa0\" (UID: \"15984fd1b414c2e0cc0cc47faca708a8\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:19.015058 kubelet[2830]: I1104 23:54:19.014633 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65b366eeafc9f8eaafbbf7578ec51d5f-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-b9f348caa0\" (UID: \"65b366eeafc9f8eaafbbf7578ec51d5f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:19.015058 kubelet[2830]: I1104 23:54:19.014656 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65b366eeafc9f8eaafbbf7578ec51d5f-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-b9f348caa0\" (UID: \"65b366eeafc9f8eaafbbf7578ec51d5f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:19.258528 kubelet[2830]: E1104 23:54:19.258376 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:19.262974 kubelet[2830]: E1104 23:54:19.262930 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:19.268429 kubelet[2830]: E1104 23:54:19.268237 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:19.536186 kubelet[2830]: I1104 23:54:19.536016 2830 apiserver.go:52] "Watching apiserver"
Nov 4 23:54:19.592474 kubelet[2830]: I1104 23:54:19.592364 2830 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 4 23:54:19.693916 kubelet[2830]: I1104 23:54:19.693868 2830 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:19.694440 kubelet[2830]: E1104 23:54:19.694389 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:19.695701 kubelet[2830]: E1104 23:54:19.695377 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:19.703869 kubelet[2830]: I1104 23:54:19.703383 2830 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 4 23:54:19.703869 kubelet[2830]: E1104 23:54:19.703454 2830 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-n-b9f348caa0\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0"
Nov 4 23:54:19.703869 kubelet[2830]: E1104 23:54:19.703635 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:19.736475 kubelet[2830]: I1104 23:54:19.736285 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.0-n-b9f348caa0" podStartSLOduration=1.736267449 podStartE2EDuration="1.736267449s" podCreationTimestamp="2025-11-04 23:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:19.729252945 +0000 UTC m=+1.308727341" watchObservedRunningTime="2025-11-04 23:54:19.736267449 +0000 UTC m=+1.315741843"
Nov 4 23:54:19.749677 kubelet[2830]: I1104 23:54:19.749553 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-b9f348caa0" podStartSLOduration=1.749536409 podStartE2EDuration="1.749536409s" podCreationTimestamp="2025-11-04 23:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:19.749274709 +0000 UTC m=+1.328749103" watchObservedRunningTime="2025-11-04 23:54:19.749536409 +0000 UTC m=+1.329010804"
Nov 4 23:54:19.750249 kubelet[2830]: I1104 23:54:19.750053 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.0-n-b9f348caa0" podStartSLOduration=1.7500383149999998 podStartE2EDuration="1.750038315s" podCreationTimestamp="2025-11-04 23:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:19.739688821 +0000 UTC m=+1.319163210" watchObservedRunningTime="2025-11-04 23:54:19.750038315 +0000 UTC m=+1.329512710"
Nov 4 23:54:20.697695 kubelet[2830]: E1104 23:54:20.697625 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:20.698807 kubelet[2830]: E1104 23:54:20.698549 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:21.700023 kubelet[2830]: E1104 23:54:21.699924 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:21.702331 kubelet[2830]: E1104 23:54:21.702100 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:22.652003 kubelet[2830]: I1104 23:54:22.651942 2830 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 4 23:54:22.652520 containerd[1598]: time="2025-11-04T23:54:22.652408567Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 4 23:54:22.653135 kubelet[2830]: I1104 23:54:22.652904 2830 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 4 23:54:23.402408 systemd[1]: Created slice kubepods-besteffort-pod85d4c8dc_140a_4f51_ad98_fd7459277b7a.slice - libcontainer container kubepods-besteffort-pod85d4c8dc_140a_4f51_ad98_fd7459277b7a.slice.
Nov 4 23:54:23.448594 kubelet[2830]: I1104 23:54:23.448374 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbz8\" (UniqueName: \"kubernetes.io/projected/85d4c8dc-140a-4f51-ad98-fd7459277b7a-kube-api-access-9wbz8\") pod \"kube-proxy-v8t9b\" (UID: \"85d4c8dc-140a-4f51-ad98-fd7459277b7a\") " pod="kube-system/kube-proxy-v8t9b"
Nov 4 23:54:23.448594 kubelet[2830]: I1104 23:54:23.448429 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/85d4c8dc-140a-4f51-ad98-fd7459277b7a-kube-proxy\") pod \"kube-proxy-v8t9b\" (UID: \"85d4c8dc-140a-4f51-ad98-fd7459277b7a\") " pod="kube-system/kube-proxy-v8t9b"
Nov 4 23:54:23.448594 kubelet[2830]: I1104 23:54:23.448486 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85d4c8dc-140a-4f51-ad98-fd7459277b7a-xtables-lock\") pod \"kube-proxy-v8t9b\" (UID: \"85d4c8dc-140a-4f51-ad98-fd7459277b7a\") " pod="kube-system/kube-proxy-v8t9b"
Nov 4 23:54:23.448594 kubelet[2830]: I1104 23:54:23.448505 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85d4c8dc-140a-4f51-ad98-fd7459277b7a-lib-modules\") pod \"kube-proxy-v8t9b\" (UID: \"85d4c8dc-140a-4f51-ad98-fd7459277b7a\") " pod="kube-system/kube-proxy-v8t9b"
Nov 4 23:54:23.719994 kubelet[2830]: E1104 23:54:23.719184 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:23.721794 containerd[1598]: time="2025-11-04T23:54:23.721713457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v8t9b,Uid:85d4c8dc-140a-4f51-ad98-fd7459277b7a,Namespace:kube-system,Attempt:0,}"
Nov 4 23:54:23.747381 containerd[1598]: time="2025-11-04T23:54:23.747295464Z" level=info msg="connecting to shim 89c560db65084f8d50b40b7b9daf9571ce210a6b6d1acc4d4b7e0a9abdde5c38" address="unix:///run/containerd/s/43ef155f112f6c68fec76cd8c6b6301f99a2436cc2a5d7a6f287186d8f968218" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:54:23.791944 systemd[1]: Started cri-containerd-89c560db65084f8d50b40b7b9daf9571ce210a6b6d1acc4d4b7e0a9abdde5c38.scope - libcontainer container 89c560db65084f8d50b40b7b9daf9571ce210a6b6d1acc4d4b7e0a9abdde5c38.
Nov 4 23:54:23.871642 containerd[1598]: time="2025-11-04T23:54:23.870772204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v8t9b,Uid:85d4c8dc-140a-4f51-ad98-fd7459277b7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"89c560db65084f8d50b40b7b9daf9571ce210a6b6d1acc4d4b7e0a9abdde5c38\""
Nov 4 23:54:23.875125 kubelet[2830]: E1104 23:54:23.875090 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:23.888734 containerd[1598]: time="2025-11-04T23:54:23.888109412Z" level=info msg="CreateContainer within sandbox \"89c560db65084f8d50b40b7b9daf9571ce210a6b6d1acc4d4b7e0a9abdde5c38\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 4 23:54:23.890543 systemd[1]: Created slice kubepods-besteffort-podd760cd3a_51a0_4796_b213_35f7927ae9bc.slice - libcontainer container kubepods-besteffort-podd760cd3a_51a0_4796_b213_35f7927ae9bc.slice.
Nov 4 23:54:23.902280 kubelet[2830]: E1104 23:54:23.901216 2830 status_manager.go:1018] "Failed to get status for pod" err="pods \"tigera-operator-65cdcdfd6d-np6n9\" is forbidden: User \"system:node:ci-4487.0.0-n-b9f348caa0\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4487.0.0-n-b9f348caa0' and this object" podUID="d760cd3a-51a0-4796-b213-35f7927ae9bc" pod="tigera-operator/tigera-operator-65cdcdfd6d-np6n9"
Nov 4 23:54:23.902280 kubelet[2830]: E1104 23:54:23.902201 2830 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4487.0.0-n-b9f348caa0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4487.0.0-n-b9f348caa0' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Nov 4 23:54:23.902468 kubelet[2830]: E1104 23:54:23.902323 2830 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4487.0.0-n-b9f348caa0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4487.0.0-n-b9f348caa0' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kubernetes-services-endpoint\"" type="*v1.ConfigMap"
Nov 4 23:54:23.921842 containerd[1598]: time="2025-11-04T23:54:23.918852182Z" level=info msg="Container 076c4a981ec7005102b823600fafe1e9c0d93f6d48ec2f493f0d8b9337142d90: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:54:23.926314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3686447567.mount: Deactivated successfully.
Nov 4 23:54:23.936636 containerd[1598]: time="2025-11-04T23:54:23.936587342Z" level=info msg="CreateContainer within sandbox \"89c560db65084f8d50b40b7b9daf9571ce210a6b6d1acc4d4b7e0a9abdde5c38\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"076c4a981ec7005102b823600fafe1e9c0d93f6d48ec2f493f0d8b9337142d90\""
Nov 4 23:54:23.938275 containerd[1598]: time="2025-11-04T23:54:23.938223711Z" level=info msg="StartContainer for \"076c4a981ec7005102b823600fafe1e9c0d93f6d48ec2f493f0d8b9337142d90\""
Nov 4 23:54:23.942005 containerd[1598]: time="2025-11-04T23:54:23.941948008Z" level=info msg="connecting to shim 076c4a981ec7005102b823600fafe1e9c0d93f6d48ec2f493f0d8b9337142d90" address="unix:///run/containerd/s/43ef155f112f6c68fec76cd8c6b6301f99a2436cc2a5d7a6f287186d8f968218" protocol=ttrpc version=3
Nov 4 23:54:23.953194 kubelet[2830]: I1104 23:54:23.952987 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d760cd3a-51a0-4796-b213-35f7927ae9bc-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-np6n9\" (UID: \"d760cd3a-51a0-4796-b213-35f7927ae9bc\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-np6n9"
Nov 4 23:54:23.953726 kubelet[2830]: I1104 23:54:23.953632 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqz8v\" (UniqueName: \"kubernetes.io/projected/d760cd3a-51a0-4796-b213-35f7927ae9bc-kube-api-access-dqz8v\") pod \"tigera-operator-65cdcdfd6d-np6n9\" (UID: \"d760cd3a-51a0-4796-b213-35f7927ae9bc\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-np6n9"
Nov 4 23:54:23.981937 systemd[1]: Started cri-containerd-076c4a981ec7005102b823600fafe1e9c0d93f6d48ec2f493f0d8b9337142d90.scope - libcontainer container 076c4a981ec7005102b823600fafe1e9c0d93f6d48ec2f493f0d8b9337142d90.
Nov 4 23:54:24.040185 containerd[1598]: time="2025-11-04T23:54:24.039954536Z" level=info msg="StartContainer for \"076c4a981ec7005102b823600fafe1e9c0d93f6d48ec2f493f0d8b9337142d90\" returns successfully"
Nov 4 23:54:24.711712 kubelet[2830]: E1104 23:54:24.710989 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:24.726019 kubelet[2830]: I1104 23:54:24.725962 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v8t9b" podStartSLOduration=1.7259445150000001 podStartE2EDuration="1.725944515s" podCreationTimestamp="2025-11-04 23:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:24.725504724 +0000 UTC m=+6.304979121" watchObservedRunningTime="2025-11-04 23:54:24.725944515 +0000 UTC m=+6.305418909"
Nov 4 23:54:25.066405 kubelet[2830]: E1104 23:54:25.066103 2830 projected.go:291] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Nov 4 23:54:25.066405 kubelet[2830]: E1104 23:54:25.066165 2830 projected.go:196] Error preparing data for projected volume kube-api-access-dqz8v for pod tigera-operator/tigera-operator-65cdcdfd6d-np6n9: failed to sync configmap cache: timed out waiting for the condition
Nov 4 23:54:25.066405 kubelet[2830]: E1104 23:54:25.066266 2830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d760cd3a-51a0-4796-b213-35f7927ae9bc-kube-api-access-dqz8v podName:d760cd3a-51a0-4796-b213-35f7927ae9bc nodeName:}" failed. No retries permitted until 2025-11-04 23:54:25.566240985 +0000 UTC m=+7.145715359 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dqz8v" (UniqueName: "kubernetes.io/projected/d760cd3a-51a0-4796-b213-35f7927ae9bc-kube-api-access-dqz8v") pod "tigera-operator-65cdcdfd6d-np6n9" (UID: "d760cd3a-51a0-4796-b213-35f7927ae9bc") : failed to sync configmap cache: timed out waiting for the condition
Nov 4 23:54:25.701825 containerd[1598]: time="2025-11-04T23:54:25.701763875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-np6n9,Uid:d760cd3a-51a0-4796-b213-35f7927ae9bc,Namespace:tigera-operator,Attempt:0,}"
Nov 4 23:54:25.726916 containerd[1598]: time="2025-11-04T23:54:25.726852747Z" level=info msg="connecting to shim 71d33b6d29cf6fe138359300cbcdbdfc410a90d7723ec6eb04f8d075ff637dc0" address="unix:///run/containerd/s/daefa1b62327e716c6258ca16969ed9ae2c1b45e899997cd95b6375c4b44dd3a" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:54:25.764035 systemd[1]: Started cri-containerd-71d33b6d29cf6fe138359300cbcdbdfc410a90d7723ec6eb04f8d075ff637dc0.scope - libcontainer container 71d33b6d29cf6fe138359300cbcdbdfc410a90d7723ec6eb04f8d075ff637dc0.
Nov 4 23:54:25.834455 containerd[1598]: time="2025-11-04T23:54:25.834397909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-np6n9,Uid:d760cd3a-51a0-4796-b213-35f7927ae9bc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"71d33b6d29cf6fe138359300cbcdbdfc410a90d7723ec6eb04f8d075ff637dc0\""
Nov 4 23:54:25.839319 containerd[1598]: time="2025-11-04T23:54:25.839183538Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 4 23:54:27.186279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797300339.mount: Deactivated successfully.
Nov 4 23:54:27.873162 containerd[1598]: time="2025-11-04T23:54:27.873096933Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:54:27.874065 containerd[1598]: time="2025-11-04T23:54:27.873899390Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 4 23:54:27.875686 containerd[1598]: time="2025-11-04T23:54:27.874602680Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:54:27.876565 containerd[1598]: time="2025-11-04T23:54:27.876490338Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:54:27.877831 containerd[1598]: time="2025-11-04T23:54:27.877800365Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.038554552s"
Nov 4 23:54:27.877831 containerd[1598]: time="2025-11-04T23:54:27.877832372Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 4 23:54:27.884331 containerd[1598]: time="2025-11-04T23:54:27.884280626Z" level=info msg="CreateContainer within sandbox \"71d33b6d29cf6fe138359300cbcdbdfc410a90d7723ec6eb04f8d075ff637dc0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 4 23:54:27.894175 containerd[1598]: time="2025-11-04T23:54:27.893330670Z" level=info msg="Container bf9916fae8ed0e4566e0fe3fded480b15c0014d2f2b30ae213a9fb901b9dd26c: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:54:27.898728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2433023854.mount: Deactivated successfully.
Nov 4 23:54:27.912102 containerd[1598]: time="2025-11-04T23:54:27.911974590Z" level=info msg="CreateContainer within sandbox \"71d33b6d29cf6fe138359300cbcdbdfc410a90d7723ec6eb04f8d075ff637dc0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bf9916fae8ed0e4566e0fe3fded480b15c0014d2f2b30ae213a9fb901b9dd26c\""
Nov 4 23:54:27.913729 containerd[1598]: time="2025-11-04T23:54:27.913466639Z" level=info msg="StartContainer for \"bf9916fae8ed0e4566e0fe3fded480b15c0014d2f2b30ae213a9fb901b9dd26c\""
Nov 4 23:54:27.916955 containerd[1598]: time="2025-11-04T23:54:27.916433561Z" level=info msg="connecting to shim bf9916fae8ed0e4566e0fe3fded480b15c0014d2f2b30ae213a9fb901b9dd26c" address="unix:///run/containerd/s/daefa1b62327e716c6258ca16969ed9ae2c1b45e899997cd95b6375c4b44dd3a" protocol=ttrpc version=3
Nov 4 23:54:27.947010 systemd[1]: Started cri-containerd-bf9916fae8ed0e4566e0fe3fded480b15c0014d2f2b30ae213a9fb901b9dd26c.scope - libcontainer container bf9916fae8ed0e4566e0fe3fded480b15c0014d2f2b30ae213a9fb901b9dd26c.
Nov 4 23:54:27.994042 containerd[1598]: time="2025-11-04T23:54:27.993993126Z" level=info msg="StartContainer for \"bf9916fae8ed0e4566e0fe3fded480b15c0014d2f2b30ae213a9fb901b9dd26c\" returns successfully"
Nov 4 23:54:28.763470 kubelet[2830]: E1104 23:54:28.763427 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:28.797998 kubelet[2830]: I1104 23:54:28.797879 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-np6n9" podStartSLOduration=3.755199787 podStartE2EDuration="5.797853486s" podCreationTimestamp="2025-11-04 23:54:23 +0000 UTC" firstStartedPulling="2025-11-04 23:54:25.836615711 +0000 UTC m=+7.416090107" lastFinishedPulling="2025-11-04 23:54:27.879269432 +0000 UTC m=+9.458743806" observedRunningTime="2025-11-04 23:54:28.738961686 +0000 UTC m=+10.318436083" watchObservedRunningTime="2025-11-04 23:54:28.797853486 +0000 UTC m=+10.377327882"
Nov 4 23:54:29.729697 kubelet[2830]: E1104 23:54:29.728059 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:29.938203 kubelet[2830]: E1104 23:54:29.938171 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:30.413077 kubelet[2830]: E1104 23:54:30.412755 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:30.731248 kubelet[2830]: E1104 23:54:30.731128 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:30.731505 kubelet[2830]: E1104 23:54:30.731413 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:35.037921 sudo[1869]: pam_unix(sudo:session): session closed for user root
Nov 4 23:54:35.043396 sshd[1868]: Connection closed by 139.178.89.65 port 42622
Nov 4 23:54:35.045088 sshd-session[1865]: pam_unix(sshd:session): session closed for user core
Nov 4 23:54:35.053035 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit.
Nov 4 23:54:35.053476 systemd[1]: sshd@8-137.184.235.85:22-139.178.89.65:42622.service: Deactivated successfully.
Nov 4 23:54:35.058326 systemd[1]: session-9.scope: Deactivated successfully.
Nov 4 23:54:35.058908 systemd[1]: session-9.scope: Consumed 7.169s CPU time, 166M memory peak.
Nov 4 23:54:35.065254 systemd-logind[1570]: Removed session 9.
Nov 4 23:54:41.495606 systemd[1]: Created slice kubepods-besteffort-podc1bf05f2_651e_49c0_ad5a_59944cb07ac1.slice - libcontainer container kubepods-besteffort-podc1bf05f2_651e_49c0_ad5a_59944cb07ac1.slice.
Nov 4 23:54:41.581452 kubelet[2830]: I1104 23:54:41.581002 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1bf05f2-651e-49c0-ad5a-59944cb07ac1-tigera-ca-bundle\") pod \"calico-typha-5fc6b45c65-wvbm8\" (UID: \"c1bf05f2-651e-49c0-ad5a-59944cb07ac1\") " pod="calico-system/calico-typha-5fc6b45c65-wvbm8" Nov 4 23:54:41.582711 kubelet[2830]: I1104 23:54:41.582184 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c1bf05f2-651e-49c0-ad5a-59944cb07ac1-typha-certs\") pod \"calico-typha-5fc6b45c65-wvbm8\" (UID: \"c1bf05f2-651e-49c0-ad5a-59944cb07ac1\") " pod="calico-system/calico-typha-5fc6b45c65-wvbm8" Nov 4 23:54:41.582808 kubelet[2830]: I1104 23:54:41.582743 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqprr\" (UniqueName: \"kubernetes.io/projected/c1bf05f2-651e-49c0-ad5a-59944cb07ac1-kube-api-access-jqprr\") pod \"calico-typha-5fc6b45c65-wvbm8\" (UID: \"c1bf05f2-651e-49c0-ad5a-59944cb07ac1\") " pod="calico-system/calico-typha-5fc6b45c65-wvbm8" Nov 4 23:54:41.631925 systemd[1]: Created slice kubepods-besteffort-pod89316db5_7ecb_4b56_9f84_f98eef21a01a.slice - libcontainer container kubepods-besteffort-pod89316db5_7ecb_4b56_9f84_f98eef21a01a.slice. 
Nov 4 23:54:41.683311 kubelet[2830]: I1104 23:54:41.683238 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89316db5-7ecb-4b56-9f84-f98eef21a01a-lib-modules\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.683983 kubelet[2830]: I1104 23:54:41.683950 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/89316db5-7ecb-4b56-9f84-f98eef21a01a-policysync\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.683983 kubelet[2830]: I1104 23:54:41.683981 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/89316db5-7ecb-4b56-9f84-f98eef21a01a-var-lib-calico\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.684171 kubelet[2830]: I1104 23:54:41.683997 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/89316db5-7ecb-4b56-9f84-f98eef21a01a-var-run-calico\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.684171 kubelet[2830]: I1104 23:54:41.684029 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/89316db5-7ecb-4b56-9f84-f98eef21a01a-cni-log-dir\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.684171 kubelet[2830]: I1104 23:54:41.684044 2830 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/89316db5-7ecb-4b56-9f84-f98eef21a01a-flexvol-driver-host\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.684171 kubelet[2830]: I1104 23:54:41.684059 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/89316db5-7ecb-4b56-9f84-f98eef21a01a-node-certs\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.684171 kubelet[2830]: I1104 23:54:41.684074 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89316db5-7ecb-4b56-9f84-f98eef21a01a-tigera-ca-bundle\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.684311 kubelet[2830]: I1104 23:54:41.684088 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/89316db5-7ecb-4b56-9f84-f98eef21a01a-cni-bin-dir\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.684311 kubelet[2830]: I1104 23:54:41.684101 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/89316db5-7ecb-4b56-9f84-f98eef21a01a-cni-net-dir\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.684311 kubelet[2830]: I1104 23:54:41.684117 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89316db5-7ecb-4b56-9f84-f98eef21a01a-xtables-lock\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.684311 kubelet[2830]: I1104 23:54:41.684135 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bkkl\" (UniqueName: \"kubernetes.io/projected/89316db5-7ecb-4b56-9f84-f98eef21a01a-kube-api-access-2bkkl\") pod \"calico-node-v9r9l\" (UID: \"89316db5-7ecb-4b56-9f84-f98eef21a01a\") " pod="calico-system/calico-node-v9r9l" Nov 4 23:54:41.796877 kubelet[2830]: E1104 23:54:41.796778 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.796877 kubelet[2830]: W1104 23:54:41.796804 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.796877 kubelet[2830]: E1104 23:54:41.796829 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.804732 kubelet[2830]: E1104 23:54:41.804602 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:41.806252 containerd[1598]: time="2025-11-04T23:54:41.806109049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fc6b45c65-wvbm8,Uid:c1bf05f2-651e-49c0-ad5a-59944cb07ac1,Namespace:calico-system,Attempt:0,}" Nov 4 23:54:41.814899 kubelet[2830]: E1104 23:54:41.814789 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.814899 kubelet[2830]: W1104 23:54:41.814811 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.817144 kubelet[2830]: E1104 23:54:41.814832 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.854239 kubelet[2830]: E1104 23:54:41.854149 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056" Nov 4 23:54:41.857882 containerd[1598]: time="2025-11-04T23:54:41.857840754Z" level=info msg="connecting to shim 7b3fed6139f8420da181eb2c7c02817851dc7241c69edd49e8284fd10e7bc5c3" address="unix:///run/containerd/s/31688c5e1a48c1df2b80e31ffcda335d67ea328d2b67ce1c9fd3a7cc1cd13b01" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:41.904316 systemd[1]: Started cri-containerd-7b3fed6139f8420da181eb2c7c02817851dc7241c69edd49e8284fd10e7bc5c3.scope - libcontainer container 7b3fed6139f8420da181eb2c7c02817851dc7241c69edd49e8284fd10e7bc5c3. Nov 4 23:54:41.941289 kubelet[2830]: E1104 23:54:41.940951 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:41.941737 containerd[1598]: time="2025-11-04T23:54:41.941706677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9r9l,Uid:89316db5-7ecb-4b56-9f84-f98eef21a01a,Namespace:calico-system,Attempt:0,}" Nov 4 23:54:41.945950 kubelet[2830]: E1104 23:54:41.945898 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.945950 kubelet[2830]: W1104 23:54:41.945935 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.946312 kubelet[2830]: E1104 23:54:41.946019 2830 plugins.go:697] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.946792 kubelet[2830]: E1104 23:54:41.946755 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.946792 kubelet[2830]: W1104 23:54:41.946787 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.947852 kubelet[2830]: E1104 23:54:41.946806 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.948217 kubelet[2830]: E1104 23:54:41.948184 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.948217 kubelet[2830]: W1104 23:54:41.948204 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.948384 kubelet[2830]: E1104 23:54:41.948235 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.948938 kubelet[2830]: E1104 23:54:41.948641 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.948938 kubelet[2830]: W1104 23:54:41.948680 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.948938 kubelet[2830]: E1104 23:54:41.948694 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.949889 kubelet[2830]: E1104 23:54:41.949866 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.949889 kubelet[2830]: W1104 23:54:41.949883 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.949991 kubelet[2830]: E1104 23:54:41.949899 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.950267 kubelet[2830]: E1104 23:54:41.950232 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.950267 kubelet[2830]: W1104 23:54:41.950248 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.950750 kubelet[2830]: E1104 23:54:41.950275 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.951031 kubelet[2830]: E1104 23:54:41.951013 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.951031 kubelet[2830]: W1104 23:54:41.951031 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.951114 kubelet[2830]: E1104 23:54:41.951055 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.952450 kubelet[2830]: E1104 23:54:41.951847 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.952450 kubelet[2830]: W1104 23:54:41.951865 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.952450 kubelet[2830]: E1104 23:54:41.951879 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.952611 kubelet[2830]: E1104 23:54:41.952580 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.952646 kubelet[2830]: W1104 23:54:41.952612 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.952646 kubelet[2830]: E1104 23:54:41.952626 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.953335 kubelet[2830]: E1104 23:54:41.953094 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.953335 kubelet[2830]: W1104 23:54:41.953112 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.953335 kubelet[2830]: E1104 23:54:41.953126 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.954586 kubelet[2830]: E1104 23:54:41.953778 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.954586 kubelet[2830]: W1104 23:54:41.953790 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.954586 kubelet[2830]: E1104 23:54:41.953804 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.954968 kubelet[2830]: E1104 23:54:41.954929 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.954968 kubelet[2830]: W1104 23:54:41.954942 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.954968 kubelet[2830]: E1104 23:54:41.954958 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.957085 kubelet[2830]: E1104 23:54:41.955284 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.957085 kubelet[2830]: W1104 23:54:41.955301 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.957085 kubelet[2830]: E1104 23:54:41.955315 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.957085 kubelet[2830]: E1104 23:54:41.955515 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.957085 kubelet[2830]: W1104 23:54:41.955524 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.957085 kubelet[2830]: E1104 23:54:41.955535 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.957085 kubelet[2830]: E1104 23:54:41.956040 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.957085 kubelet[2830]: W1104 23:54:41.956053 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.957085 kubelet[2830]: E1104 23:54:41.956168 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.957085 kubelet[2830]: E1104 23:54:41.956808 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.957447 kubelet[2830]: W1104 23:54:41.956822 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.957447 kubelet[2830]: E1104 23:54:41.956840 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.957447 kubelet[2830]: E1104 23:54:41.957295 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.957447 kubelet[2830]: W1104 23:54:41.957308 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.957447 kubelet[2830]: E1104 23:54:41.957323 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.957963 kubelet[2830]: E1104 23:54:41.957797 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.957963 kubelet[2830]: W1104 23:54:41.957813 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.957963 kubelet[2830]: E1104 23:54:41.957827 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.958334 kubelet[2830]: E1104 23:54:41.958267 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.958334 kubelet[2830]: W1104 23:54:41.958284 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.958334 kubelet[2830]: E1104 23:54:41.958298 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.960410 kubelet[2830]: E1104 23:54:41.958841 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.960410 kubelet[2830]: W1104 23:54:41.958859 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.960410 kubelet[2830]: E1104 23:54:41.958874 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.988639 kubelet[2830]: E1104 23:54:41.988605 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.988639 kubelet[2830]: W1104 23:54:41.988631 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.989125 kubelet[2830]: E1104 23:54:41.989060 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.989366 kubelet[2830]: I1104 23:54:41.989236 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/907a3d1f-a9d8-4fa7-9529-2703403b5056-registration-dir\") pod \"csi-node-driver-2h4kv\" (UID: \"907a3d1f-a9d8-4fa7-9529-2703403b5056\") " pod="calico-system/csi-node-driver-2h4kv" Nov 4 23:54:41.989588 kubelet[2830]: E1104 23:54:41.989570 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.989809 kubelet[2830]: W1104 23:54:41.989694 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.989809 kubelet[2830]: E1104 23:54:41.989716 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.990193 kubelet[2830]: E1104 23:54:41.990176 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.990326 kubelet[2830]: W1104 23:54:41.990278 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.990326 kubelet[2830]: E1104 23:54:41.990299 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:41.991600 kubelet[2830]: E1104 23:54:41.991545 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.991600 kubelet[2830]: W1104 23:54:41.991577 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.991915 kubelet[2830]: E1104 23:54:41.991607 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.991915 kubelet[2830]: I1104 23:54:41.991641 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/907a3d1f-a9d8-4fa7-9529-2703403b5056-varrun\") pod \"csi-node-driver-2h4kv\" (UID: \"907a3d1f-a9d8-4fa7-9529-2703403b5056\") " pod="calico-system/csi-node-driver-2h4kv" Nov 4 23:54:41.992602 containerd[1598]: time="2025-11-04T23:54:41.992554737Z" level=info msg="connecting to shim 7a0d1ccc5132c9a10b960d9e18739b3b1e53c223d91b4bf98e01e543703836f2" address="unix:///run/containerd/s/539f8c6abb6cf16631aa9c77583c23edf03fbf9853ec1aebb44f3b3432d57d15" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:41.993814 kubelet[2830]: E1104 23:54:41.993794 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.993968 kubelet[2830]: W1104 23:54:41.993952 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.994055 kubelet[2830]: E1104 23:54:41.994044 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.994278 kubelet[2830]: E1104 23:54:41.994267 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.994528 kubelet[2830]: W1104 23:54:41.994398 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.994528 kubelet[2830]: E1104 23:54:41.994416 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:41.994650 kubelet[2830]: E1104 23:54:41.994641 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:41.994798 kubelet[2830]: W1104 23:54:41.994785 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:41.994848 kubelet[2830]: E1104 23:54:41.994840 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 4 23:54:41.994923 kubelet[2830]: I1104 23:54:41.994909 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/907a3d1f-a9d8-4fa7-9529-2703403b5056-socket-dir\") pod \"csi-node-driver-2h4kv\" (UID: \"907a3d1f-a9d8-4fa7-9529-2703403b5056\") " pod="calico-system/csi-node-driver-2h4kv"
Nov 4 23:54:41.995192 kubelet[2830]: E1104 23:54:41.995178 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:41.995327 kubelet[2830]: W1104 23:54:41.995255 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:41.995327 kubelet[2830]: E1104 23:54:41.995268 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:41.995327 kubelet[2830]: I1104 23:54:41.995296 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sssdh\" (UniqueName: \"kubernetes.io/projected/907a3d1f-a9d8-4fa7-9529-2703403b5056-kube-api-access-sssdh\") pod \"csi-node-driver-2h4kv\" (UID: \"907a3d1f-a9d8-4fa7-9529-2703403b5056\") " pod="calico-system/csi-node-driver-2h4kv"
Nov 4 23:54:41.995812 kubelet[2830]: E1104 23:54:41.995789 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:41.995812 kubelet[2830]: W1104 23:54:41.995811 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:41.995891 kubelet[2830]: E1104 23:54:41.995828 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:41.996197 kubelet[2830]: E1104 23:54:41.996181 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:41.996248 kubelet[2830]: W1104 23:54:41.996197 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:41.996248 kubelet[2830]: E1104 23:54:41.996213 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:41.998028 kubelet[2830]: E1104 23:54:41.998008 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:41.998028 kubelet[2830]: W1104 23:54:41.998025 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:41.998138 kubelet[2830]: E1104 23:54:41.998044 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:41.998138 kubelet[2830]: I1104 23:54:41.998078 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/907a3d1f-a9d8-4fa7-9529-2703403b5056-kubelet-dir\") pod \"csi-node-driver-2h4kv\" (UID: \"907a3d1f-a9d8-4fa7-9529-2703403b5056\") " pod="calico-system/csi-node-driver-2h4kv"
Nov 4 23:54:41.999778 kubelet[2830]: E1104 23:54:41.999719 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:41.999778 kubelet[2830]: W1104 23:54:41.999755 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:41.999778 kubelet[2830]: E1104 23:54:41.999771 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.000043 kubelet[2830]: E1104 23:54:42.000022 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.000043 kubelet[2830]: W1104 23:54:42.000037 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.000153 kubelet[2830]: E1104 23:54:42.000051 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.001827 kubelet[2830]: E1104 23:54:42.001793 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.001827 kubelet[2830]: W1104 23:54:42.001818 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.002004 kubelet[2830]: E1104 23:54:42.001837 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.002170 kubelet[2830]: E1104 23:54:42.002151 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.002209 kubelet[2830]: W1104 23:54:42.002169 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.002209 kubelet[2830]: E1104 23:54:42.002183 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.045892 systemd[1]: Started cri-containerd-7a0d1ccc5132c9a10b960d9e18739b3b1e53c223d91b4bf98e01e543703836f2.scope - libcontainer container 7a0d1ccc5132c9a10b960d9e18739b3b1e53c223d91b4bf98e01e543703836f2.
Nov 4 23:54:42.099960 kubelet[2830]: E1104 23:54:42.099245 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.099960 kubelet[2830]: W1104 23:54:42.099265 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.099960 kubelet[2830]: E1104 23:54:42.099285 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.099960 kubelet[2830]: E1104 23:54:42.099862 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.099960 kubelet[2830]: W1104 23:54:42.099879 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.099960 kubelet[2830]: E1104 23:54:42.099897 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.100332 kubelet[2830]: E1104 23:54:42.100208 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.100332 kubelet[2830]: W1104 23:54:42.100217 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.100332 kubelet[2830]: E1104 23:54:42.100228 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.100469 kubelet[2830]: E1104 23:54:42.100405 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.100469 kubelet[2830]: W1104 23:54:42.100412 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.100469 kubelet[2830]: E1104 23:54:42.100419 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.100652 kubelet[2830]: E1104 23:54:42.100576 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.100652 kubelet[2830]: W1104 23:54:42.100582 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.100652 kubelet[2830]: E1104 23:54:42.100589 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.106036 kubelet[2830]: E1104 23:54:42.101096 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.106036 kubelet[2830]: W1104 23:54:42.101105 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.106036 kubelet[2830]: E1104 23:54:42.101152 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.106036 kubelet[2830]: E1104 23:54:42.101381 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.106036 kubelet[2830]: W1104 23:54:42.101389 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.106036 kubelet[2830]: E1104 23:54:42.101436 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.106036 kubelet[2830]: E1104 23:54:42.101614 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.106036 kubelet[2830]: W1104 23:54:42.101622 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.106036 kubelet[2830]: E1104 23:54:42.101630 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.106036 kubelet[2830]: E1104 23:54:42.102149 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.106984 kubelet[2830]: W1104 23:54:42.102234 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.106984 kubelet[2830]: E1104 23:54:42.102249 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.106984 kubelet[2830]: E1104 23:54:42.102651 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.106984 kubelet[2830]: W1104 23:54:42.102685 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.106984 kubelet[2830]: E1104 23:54:42.102697 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.106984 kubelet[2830]: E1104 23:54:42.102872 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.106984 kubelet[2830]: W1104 23:54:42.102895 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.106984 kubelet[2830]: E1104 23:54:42.102904 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.106984 kubelet[2830]: E1104 23:54:42.103115 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.106984 kubelet[2830]: W1104 23:54:42.103122 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.107464 kubelet[2830]: E1104 23:54:42.103131 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.107464 kubelet[2830]: E1104 23:54:42.103399 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.107464 kubelet[2830]: W1104 23:54:42.103411 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.107464 kubelet[2830]: E1104 23:54:42.103422 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.107464 kubelet[2830]: E1104 23:54:42.103795 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.107464 kubelet[2830]: W1104 23:54:42.103805 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.107464 kubelet[2830]: E1104 23:54:42.103815 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.107464 kubelet[2830]: E1104 23:54:42.104039 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.107464 kubelet[2830]: W1104 23:54:42.104047 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.107464 kubelet[2830]: E1104 23:54:42.104055 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.110856 kubelet[2830]: E1104 23:54:42.104320 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.110856 kubelet[2830]: W1104 23:54:42.104328 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.110856 kubelet[2830]: E1104 23:54:42.104339 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.110856 kubelet[2830]: E1104 23:54:42.104854 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.110856 kubelet[2830]: W1104 23:54:42.104864 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.110856 kubelet[2830]: E1104 23:54:42.104874 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.110856 kubelet[2830]: E1104 23:54:42.105379 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.110856 kubelet[2830]: W1104 23:54:42.105392 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.110856 kubelet[2830]: E1104 23:54:42.105404 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.110856 kubelet[2830]: E1104 23:54:42.105894 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.111566 kubelet[2830]: W1104 23:54:42.105904 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.111566 kubelet[2830]: E1104 23:54:42.105915 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.111566 kubelet[2830]: E1104 23:54:42.106180 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.111566 kubelet[2830]: W1104 23:54:42.106217 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.111566 kubelet[2830]: E1104 23:54:42.106226 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.111566 kubelet[2830]: E1104 23:54:42.106427 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.111566 kubelet[2830]: W1104 23:54:42.106436 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.111566 kubelet[2830]: E1104 23:54:42.106445 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.111566 kubelet[2830]: E1104 23:54:42.109804 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.111566 kubelet[2830]: W1104 23:54:42.109825 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.114255 kubelet[2830]: E1104 23:54:42.109862 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.114255 kubelet[2830]: E1104 23:54:42.110164 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.114255 kubelet[2830]: W1104 23:54:42.110177 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.114255 kubelet[2830]: E1104 23:54:42.110193 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.114255 kubelet[2830]: E1104 23:54:42.112168 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.114255 kubelet[2830]: W1104 23:54:42.112187 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.114255 kubelet[2830]: E1104 23:54:42.112205 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.114255 kubelet[2830]: E1104 23:54:42.112528 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.114255 kubelet[2830]: W1104 23:54:42.112542 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.114255 kubelet[2830]: E1104 23:54:42.112557 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.135812 kubelet[2830]: E1104 23:54:42.135772 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:42.135812 kubelet[2830]: W1104 23:54:42.135796 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:42.135812 kubelet[2830]: E1104 23:54:42.135821 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:42.141764 containerd[1598]: time="2025-11-04T23:54:42.141593603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fc6b45c65-wvbm8,Uid:c1bf05f2-651e-49c0-ad5a-59944cb07ac1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b3fed6139f8420da181eb2c7c02817851dc7241c69edd49e8284fd10e7bc5c3\""
Nov 4 23:54:42.143280 kubelet[2830]: E1104 23:54:42.143244 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:42.148624 containerd[1598]: time="2025-11-04T23:54:42.148489442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 4 23:54:42.176221 containerd[1598]: time="2025-11-04T23:54:42.176159084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9r9l,Uid:89316db5-7ecb-4b56-9f84-f98eef21a01a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a0d1ccc5132c9a10b960d9e18739b3b1e53c223d91b4bf98e01e543703836f2\""
Nov 4 23:54:42.177837 kubelet[2830]: E1104 23:54:42.177810 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:43.410250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762575879.mount: Deactivated successfully.
Nov 4 23:54:43.632577 kubelet[2830]: E1104 23:54:43.632493 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056"
Nov 4 23:54:44.232138 containerd[1598]: time="2025-11-04T23:54:44.232051910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:54:44.233623 containerd[1598]: time="2025-11-04T23:54:44.233364356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 4 23:54:44.234857 containerd[1598]: time="2025-11-04T23:54:44.234771457Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:54:44.239507 containerd[1598]: time="2025-11-04T23:54:44.239450951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:54:44.240417 containerd[1598]: time="2025-11-04T23:54:44.239877532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.09134327s"
Nov 4 23:54:44.240417 containerd[1598]: time="2025-11-04T23:54:44.239919092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 4 23:54:44.243319 containerd[1598]: time="2025-11-04T23:54:44.243275382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 4 23:54:44.291694 containerd[1598]: time="2025-11-04T23:54:44.290391273Z" level=info msg="CreateContainer within sandbox \"7b3fed6139f8420da181eb2c7c02817851dc7241c69edd49e8284fd10e7bc5c3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 4 23:54:44.298506 containerd[1598]: time="2025-11-04T23:54:44.298360136Z" level=info msg="Container f4b4564045b21e492b7077ca8a65510034b5f25cbf10eceb003c8a85c3a81e84: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:54:44.347727 containerd[1598]: time="2025-11-04T23:54:44.347608990Z" level=info msg="CreateContainer within sandbox \"7b3fed6139f8420da181eb2c7c02817851dc7241c69edd49e8284fd10e7bc5c3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f4b4564045b21e492b7077ca8a65510034b5f25cbf10eceb003c8a85c3a81e84\""
Nov 4 23:54:44.349695 containerd[1598]: time="2025-11-04T23:54:44.348603733Z" level=info msg="StartContainer for \"f4b4564045b21e492b7077ca8a65510034b5f25cbf10eceb003c8a85c3a81e84\""
Nov 4 23:54:44.349982 containerd[1598]: time="2025-11-04T23:54:44.349934096Z" level=info msg="connecting to shim f4b4564045b21e492b7077ca8a65510034b5f25cbf10eceb003c8a85c3a81e84" address="unix:///run/containerd/s/31688c5e1a48c1df2b80e31ffcda335d67ea328d2b67ce1c9fd3a7cc1cd13b01" protocol=ttrpc version=3
Nov 4 23:54:44.382967 systemd[1]: Started cri-containerd-f4b4564045b21e492b7077ca8a65510034b5f25cbf10eceb003c8a85c3a81e84.scope - libcontainer container f4b4564045b21e492b7077ca8a65510034b5f25cbf10eceb003c8a85c3a81e84.
Nov 4 23:54:44.461939 containerd[1598]: time="2025-11-04T23:54:44.461855456Z" level=info msg="StartContainer for \"f4b4564045b21e492b7077ca8a65510034b5f25cbf10eceb003c8a85c3a81e84\" returns successfully"
Nov 4 23:54:44.788349 kubelet[2830]: E1104 23:54:44.788298 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:54:44.884356 kubelet[2830]: E1104 23:54:44.884181 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.884356 kubelet[2830]: W1104 23:54:44.884208 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.885053 kubelet[2830]: E1104 23:54:44.884967 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:44.885635 kubelet[2830]: E1104 23:54:44.885540 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.886271 kubelet[2830]: W1104 23:54:44.885565 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.886271 kubelet[2830]: E1104 23:54:44.885992 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:44.886926 kubelet[2830]: E1104 23:54:44.886761 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.886926 kubelet[2830]: W1104 23:54:44.886779 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.886926 kubelet[2830]: E1104 23:54:44.886798 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:44.887470 kubelet[2830]: E1104 23:54:44.887363 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.887470 kubelet[2830]: W1104 23:54:44.887379 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.887470 kubelet[2830]: E1104 23:54:44.887397 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:44.887940 kubelet[2830]: E1104 23:54:44.887862 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.887940 kubelet[2830]: W1104 23:54:44.887878 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.887940 kubelet[2830]: E1104 23:54:44.887892 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:44.888732 kubelet[2830]: E1104 23:54:44.888700 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.888894 kubelet[2830]: W1104 23:54:44.888810 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.888894 kubelet[2830]: E1104 23:54:44.888830 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:44.889210 kubelet[2830]: E1104 23:54:44.889192 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.889419 kubelet[2830]: W1104 23:54:44.889293 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.889419 kubelet[2830]: E1104 23:54:44.889313 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:44.889797 kubelet[2830]: E1104 23:54:44.889778 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.890014 kubelet[2830]: W1104 23:54:44.889835 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.890014 kubelet[2830]: E1104 23:54:44.889854 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:44.890428 kubelet[2830]: E1104 23:54:44.890412 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.890680 kubelet[2830]: W1104 23:54:44.890561 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.890680 kubelet[2830]: E1104 23:54:44.890585 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:44.891902 kubelet[2830]: E1104 23:54:44.891866 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.892299 kubelet[2830]: W1104 23:54:44.892021 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.892551 kubelet[2830]: E1104 23:54:44.892380 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:44.892985 kubelet[2830]: E1104 23:54:44.892971 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.893238 kubelet[2830]: W1104 23:54:44.893102 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.893238 kubelet[2830]: E1104 23:54:44.893126 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:54:44.893401 kubelet[2830]: E1104 23:54:44.893391 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:54:44.893456 kubelet[2830]: W1104 23:54:44.893447 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:54:44.893505 kubelet[2830]: E1104 23:54:44.893496 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 4 23:54:44.893885 kubelet[2830]: E1104 23:54:44.893773 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.893885 kubelet[2830]: W1104 23:54:44.893788 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.893885 kubelet[2830]: E1104 23:54:44.893799 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:44.894114 kubelet[2830]: E1104 23:54:44.894103 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.894187 kubelet[2830]: W1104 23:54:44.894177 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.894255 kubelet[2830]: E1104 23:54:44.894245 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:44.894771 kubelet[2830]: E1104 23:54:44.894682 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.894771 kubelet[2830]: W1104 23:54:44.894694 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.894771 kubelet[2830]: E1104 23:54:44.894705 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:44.926351 kubelet[2830]: E1104 23:54:44.926203 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.927019 kubelet[2830]: W1104 23:54:44.926234 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.927019 kubelet[2830]: E1104 23:54:44.926541 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:44.928090 kubelet[2830]: E1104 23:54:44.927983 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.928090 kubelet[2830]: W1104 23:54:44.928006 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.928362 kubelet[2830]: E1104 23:54:44.928025 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:44.929201 kubelet[2830]: E1104 23:54:44.929118 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.929201 kubelet[2830]: W1104 23:54:44.929133 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.929201 kubelet[2830]: E1104 23:54:44.929152 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:44.929766 kubelet[2830]: E1104 23:54:44.929694 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.929766 kubelet[2830]: W1104 23:54:44.929740 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.929766 kubelet[2830]: E1104 23:54:44.929752 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:44.930585 kubelet[2830]: E1104 23:54:44.930366 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.930585 kubelet[2830]: W1104 23:54:44.930382 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.930585 kubelet[2830]: E1104 23:54:44.930397 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:44.930995 kubelet[2830]: E1104 23:54:44.930919 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.930995 kubelet[2830]: W1104 23:54:44.930934 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.930995 kubelet[2830]: E1104 23:54:44.930946 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:44.931435 kubelet[2830]: E1104 23:54:44.931326 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.931435 kubelet[2830]: W1104 23:54:44.931342 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.931435 kubelet[2830]: E1104 23:54:44.931356 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:44.932162 kubelet[2830]: E1104 23:54:44.932043 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.932162 kubelet[2830]: W1104 23:54:44.932059 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.932162 kubelet[2830]: E1104 23:54:44.932074 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:44.932611 kubelet[2830]: E1104 23:54:44.932522 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.932611 kubelet[2830]: W1104 23:54:44.932534 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.932611 kubelet[2830]: E1104 23:54:44.932551 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:44.933024 kubelet[2830]: E1104 23:54:44.932959 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.933024 kubelet[2830]: W1104 23:54:44.932971 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.933024 kubelet[2830]: E1104 23:54:44.932982 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:44.933487 kubelet[2830]: E1104 23:54:44.933419 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.933634 kubelet[2830]: W1104 23:54:44.933432 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.933634 kubelet[2830]: E1104 23:54:44.933545 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:44.934056 kubelet[2830]: E1104 23:54:44.933954 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.934056 kubelet[2830]: W1104 23:54:44.933979 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.934056 kubelet[2830]: E1104 23:54:44.933991 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:44.934426 kubelet[2830]: E1104 23:54:44.934413 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.934578 kubelet[2830]: W1104 23:54:44.934467 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.934578 kubelet[2830]: E1104 23:54:44.934480 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:44.934984 kubelet[2830]: E1104 23:54:44.934970 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.935317 kubelet[2830]: W1104 23:54:44.935157 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.935317 kubelet[2830]: E1104 23:54:44.935174 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:44.936071 kubelet[2830]: E1104 23:54:44.936007 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.936071 kubelet[2830]: W1104 23:54:44.936020 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.936497 kubelet[2830]: E1104 23:54:44.936031 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:44.936701 kubelet[2830]: E1104 23:54:44.936689 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.936857 kubelet[2830]: W1104 23:54:44.936842 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.937069 kubelet[2830]: E1104 23:54:44.936969 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:44.937496 kubelet[2830]: E1104 23:54:44.937376 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.937496 kubelet[2830]: W1104 23:54:44.937412 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.937496 kubelet[2830]: E1104 23:54:44.937424 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:44.937967 kubelet[2830]: E1104 23:54:44.937954 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:44.938124 kubelet[2830]: W1104 23:54:44.938050 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:44.938124 kubelet[2830]: E1104 23:54:44.938090 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.632200 kubelet[2830]: E1104 23:54:45.632072 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056" Nov 4 23:54:45.792501 kubelet[2830]: I1104 23:54:45.792445 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:54:45.793860 kubelet[2830]: E1104 23:54:45.793812 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:45.801400 kubelet[2830]: E1104 23:54:45.801118 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.801400 kubelet[2830]: W1104 23:54:45.801153 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.801400 kubelet[2830]: E1104 23:54:45.801185 2830 
plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.803228 kubelet[2830]: E1104 23:54:45.803204 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.804517 kubelet[2830]: W1104 23:54:45.804253 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.804517 kubelet[2830]: E1104 23:54:45.804286 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.807564 kubelet[2830]: E1104 23:54:45.807536 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.808062 kubelet[2830]: W1104 23:54:45.807696 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.808062 kubelet[2830]: E1104 23:54:45.807728 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.809693 kubelet[2830]: E1104 23:54:45.808906 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.810042 kubelet[2830]: W1104 23:54:45.809832 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.810042 kubelet[2830]: E1104 23:54:45.809866 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.810360 kubelet[2830]: E1104 23:54:45.810342 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.810565 kubelet[2830]: W1104 23:54:45.810436 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.810565 kubelet[2830]: E1104 23:54:45.810457 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.810959 kubelet[2830]: E1104 23:54:45.810844 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.810959 kubelet[2830]: W1104 23:54:45.810858 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.810959 kubelet[2830]: E1104 23:54:45.810873 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.811362 kubelet[2830]: E1104 23:54:45.811233 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.811362 kubelet[2830]: W1104 23:54:45.811247 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.811362 kubelet[2830]: E1104 23:54:45.811260 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.811764 kubelet[2830]: E1104 23:54:45.811750 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.811934 kubelet[2830]: W1104 23:54:45.811815 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.811934 kubelet[2830]: E1104 23:54:45.811830 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.812298 kubelet[2830]: E1104 23:54:45.812056 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.812298 kubelet[2830]: W1104 23:54:45.812068 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.812298 kubelet[2830]: E1104 23:54:45.812081 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.812653 kubelet[2830]: E1104 23:54:45.812637 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.812906 kubelet[2830]: W1104 23:54:45.812790 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.812906 kubelet[2830]: E1104 23:54:45.812812 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.813299 kubelet[2830]: E1104 23:54:45.813254 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.813299 kubelet[2830]: W1104 23:54:45.813270 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.813518 kubelet[2830]: E1104 23:54:45.813385 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.814067 kubelet[2830]: E1104 23:54:45.813982 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.814067 kubelet[2830]: W1104 23:54:45.813998 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.814067 kubelet[2830]: E1104 23:54:45.814013 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.815877 kubelet[2830]: E1104 23:54:45.815859 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.816076 kubelet[2830]: W1104 23:54:45.815983 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.816076 kubelet[2830]: E1104 23:54:45.816017 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.816839 kubelet[2830]: E1104 23:54:45.816757 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.816839 kubelet[2830]: W1104 23:54:45.816774 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.816839 kubelet[2830]: E1104 23:54:45.816786 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.817755 kubelet[2830]: E1104 23:54:45.817736 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.818225 kubelet[2830]: W1104 23:54:45.817892 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.818225 kubelet[2830]: E1104 23:54:45.817914 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.837542 kubelet[2830]: E1104 23:54:45.837493 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.837893 kubelet[2830]: W1104 23:54:45.837694 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.837893 kubelet[2830]: E1104 23:54:45.837720 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.838287 kubelet[2830]: E1104 23:54:45.838200 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.838287 kubelet[2830]: W1104 23:54:45.838260 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.838287 kubelet[2830]: E1104 23:54:45.838273 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.838794 kubelet[2830]: E1104 23:54:45.838776 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.838929 kubelet[2830]: W1104 23:54:45.838855 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.838929 kubelet[2830]: E1104 23:54:45.838870 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.839478 kubelet[2830]: E1104 23:54:45.839457 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.839560 kubelet[2830]: W1104 23:54:45.839477 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.839560 kubelet[2830]: E1104 23:54:45.839501 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.840234 kubelet[2830]: E1104 23:54:45.840205 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.840383 kubelet[2830]: W1104 23:54:45.840326 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.840383 kubelet[2830]: E1104 23:54:45.840345 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.840784 kubelet[2830]: E1104 23:54:45.840748 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.840784 kubelet[2830]: W1104 23:54:45.840759 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.840784 kubelet[2830]: E1104 23:54:45.840772 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.841312 kubelet[2830]: E1104 23:54:45.841296 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.841459 kubelet[2830]: W1104 23:54:45.841373 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.841459 kubelet[2830]: E1104 23:54:45.841387 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.841911 kubelet[2830]: E1104 23:54:45.841847 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.841911 kubelet[2830]: W1104 23:54:45.841858 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.841911 kubelet[2830]: E1104 23:54:45.841869 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.842353 kubelet[2830]: E1104 23:54:45.842195 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.842353 kubelet[2830]: W1104 23:54:45.842206 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.842353 kubelet[2830]: E1104 23:54:45.842222 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.842893 kubelet[2830]: E1104 23:54:45.842817 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.842893 kubelet[2830]: W1104 23:54:45.842830 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.843353 kubelet[2830]: E1104 23:54:45.842841 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.843634 kubelet[2830]: E1104 23:54:45.843624 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.843972 kubelet[2830]: W1104 23:54:45.843856 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.843972 kubelet[2830]: E1104 23:54:45.843871 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.844115 kubelet[2830]: E1104 23:54:45.844102 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.844164 kubelet[2830]: W1104 23:54:45.844155 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.844244 kubelet[2830]: E1104 23:54:45.844231 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.845540 kubelet[2830]: E1104 23:54:45.845376 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.845540 kubelet[2830]: W1104 23:54:45.845400 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.845540 kubelet[2830]: E1104 23:54:45.845412 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.846061 kubelet[2830]: E1104 23:54:45.845944 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.846061 kubelet[2830]: W1104 23:54:45.845955 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.846061 kubelet[2830]: E1104 23:54:45.845966 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.847010 kubelet[2830]: E1104 23:54:45.846968 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.847010 kubelet[2830]: W1104 23:54:45.846981 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.847010 kubelet[2830]: E1104 23:54:45.846993 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.847443 kubelet[2830]: E1104 23:54:45.847417 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.848002 kubelet[2830]: W1104 23:54:45.847853 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.848002 kubelet[2830]: E1104 23:54:45.847882 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.848363 kubelet[2830]: E1104 23:54:45.848350 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.848443 kubelet[2830]: W1104 23:54:45.848432 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.848588 kubelet[2830]: E1104 23:54:45.848572 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:54:45.848894 kubelet[2830]: E1104 23:54:45.848882 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:54:45.848971 kubelet[2830]: W1104 23:54:45.848961 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:54:45.849756 kubelet[2830]: E1104 23:54:45.849696 2830 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:54:45.855028 containerd[1598]: time="2025-11-04T23:54:45.854977101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:45.855851 containerd[1598]: time="2025-11-04T23:54:45.855713836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 4 23:54:45.856711 containerd[1598]: time="2025-11-04T23:54:45.856649472Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:45.858407 containerd[1598]: time="2025-11-04T23:54:45.858355409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:45.859442 containerd[1598]: time="2025-11-04T23:54:45.858999466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.614746346s" Nov 4 23:54:45.859442 containerd[1598]: time="2025-11-04T23:54:45.859035969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 4 23:54:45.864420 containerd[1598]: time="2025-11-04T23:54:45.864371962Z" level=info msg="CreateContainer within sandbox \"7a0d1ccc5132c9a10b960d9e18739b3b1e53c223d91b4bf98e01e543703836f2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 4 23:54:45.898696 containerd[1598]: time="2025-11-04T23:54:45.896451831Z" level=info msg="Container e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:45.899079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477323477.mount: Deactivated successfully. Nov 4 23:54:45.908647 containerd[1598]: time="2025-11-04T23:54:45.908580762Z" level=info msg="CreateContainer within sandbox \"7a0d1ccc5132c9a10b960d9e18739b3b1e53c223d91b4bf98e01e543703836f2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a\"" Nov 4 23:54:45.911717 containerd[1598]: time="2025-11-04T23:54:45.909884266Z" level=info msg="StartContainer for \"e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a\"" Nov 4 23:54:45.913951 containerd[1598]: time="2025-11-04T23:54:45.913828900Z" level=info msg="connecting to shim e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a" address="unix:///run/containerd/s/539f8c6abb6cf16631aa9c77583c23edf03fbf9853ec1aebb44f3b3432d57d15" protocol=ttrpc version=3 Nov 4 23:54:45.959558 systemd[1]: Started cri-containerd-e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a.scope - libcontainer container e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a. Nov 4 23:54:46.043442 containerd[1598]: time="2025-11-04T23:54:46.043390990Z" level=info msg="StartContainer for \"e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a\" returns successfully" Nov 4 23:54:46.063112 systemd[1]: cri-containerd-e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a.scope: Deactivated successfully. 
Nov 4 23:54:46.094778 containerd[1598]: time="2025-11-04T23:54:46.094730914Z" level=info msg="received exit event container_id:\"e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a\" id:\"e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a\" pid:3541 exited_at:{seconds:1762300486 nanos:65625973}" Nov 4 23:54:46.125797 containerd[1598]: time="2025-11-04T23:54:46.125740872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a\" id:\"e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a\" pid:3541 exited_at:{seconds:1762300486 nanos:65625973}" Nov 4 23:54:46.149567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e542d8fc0b54c8b6adabbaca98361c546b70ee4d3097f497c66ea4404460931a-rootfs.mount: Deactivated successfully. Nov 4 23:54:46.797051 kubelet[2830]: E1104 23:54:46.797001 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:46.799591 containerd[1598]: time="2025-11-04T23:54:46.799549895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 4 23:54:46.831541 kubelet[2830]: I1104 23:54:46.829938 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fc6b45c65-wvbm8" podStartSLOduration=3.734050969 podStartE2EDuration="5.829918646s" podCreationTimestamp="2025-11-04 23:54:41 +0000 UTC" firstStartedPulling="2025-11-04 23:54:42.146599011 +0000 UTC m=+23.726073385" lastFinishedPulling="2025-11-04 23:54:44.242466675 +0000 UTC m=+25.821941062" observedRunningTime="2025-11-04 23:54:44.812766042 +0000 UTC m=+26.392240436" watchObservedRunningTime="2025-11-04 23:54:46.829918646 +0000 UTC m=+28.409393041" Nov 4 23:54:47.633415 kubelet[2830]: E1104 23:54:47.632957 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="network is 
not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056" Nov 4 23:54:49.632414 kubelet[2830]: E1104 23:54:49.632151 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056" Nov 4 23:54:49.963444 containerd[1598]: time="2025-11-04T23:54:49.963370381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 4 23:54:49.964920 containerd[1598]: time="2025-11-04T23:54:49.963468785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:49.966108 containerd[1598]: time="2025-11-04T23:54:49.966073135Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:49.966906 containerd[1598]: time="2025-11-04T23:54:49.966875838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.167083616s" Nov 4 23:54:49.967024 containerd[1598]: time="2025-11-04T23:54:49.967010462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 4 23:54:49.967451 
containerd[1598]: time="2025-11-04T23:54:49.967423434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:49.973528 containerd[1598]: time="2025-11-04T23:54:49.973470945Z" level=info msg="CreateContainer within sandbox \"7a0d1ccc5132c9a10b960d9e18739b3b1e53c223d91b4bf98e01e543703836f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 4 23:54:49.984697 containerd[1598]: time="2025-11-04T23:54:49.981406491Z" level=info msg="Container e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:49.997997 containerd[1598]: time="2025-11-04T23:54:49.997932116Z" level=info msg="CreateContainer within sandbox \"7a0d1ccc5132c9a10b960d9e18739b3b1e53c223d91b4bf98e01e543703836f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c\"" Nov 4 23:54:49.999202 containerd[1598]: time="2025-11-04T23:54:49.998887157Z" level=info msg="StartContainer for \"e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c\"" Nov 4 23:54:50.002069 containerd[1598]: time="2025-11-04T23:54:50.001838365Z" level=info msg="connecting to shim e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c" address="unix:///run/containerd/s/539f8c6abb6cf16631aa9c77583c23edf03fbf9853ec1aebb44f3b3432d57d15" protocol=ttrpc version=3 Nov 4 23:54:50.031948 systemd[1]: Started cri-containerd-e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c.scope - libcontainer container e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c. 
Nov 4 23:54:50.090545 containerd[1598]: time="2025-11-04T23:54:50.090469617Z" level=info msg="StartContainer for \"e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c\" returns successfully" Nov 4 23:54:50.726010 systemd[1]: cri-containerd-e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c.scope: Deactivated successfully. Nov 4 23:54:50.726482 systemd[1]: cri-containerd-e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c.scope: Consumed 679ms CPU time, 173.8M memory peak, 8.2M read from disk, 171.3M written to disk. Nov 4 23:54:50.793226 containerd[1598]: time="2025-11-04T23:54:50.793173370Z" level=info msg="received exit event container_id:\"e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c\" id:\"e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c\" pid:3599 exited_at:{seconds:1762300490 nanos:778095754}" Nov 4 23:54:50.796683 containerd[1598]: time="2025-11-04T23:54:50.796608903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c\" id:\"e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c\" pid:3599 exited_at:{seconds:1762300490 nanos:778095754}" Nov 4 23:54:50.824514 kubelet[2830]: E1104 23:54:50.822880 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:50.914035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e17d786cfb23328fd1ef63dd4f4dd1b30939a8d73d0280a817e4343992c8b67c-rootfs.mount: Deactivated successfully. 
Nov 4 23:54:50.931137 kubelet[2830]: I1104 23:54:50.918169 2830 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 4 23:54:50.989529 systemd[1]: Created slice kubepods-burstable-pod26c41d20_6d80_4a04_a264_3af62c30ea8d.slice - libcontainer container kubepods-burstable-pod26c41d20_6d80_4a04_a264_3af62c30ea8d.slice. Nov 4 23:54:51.011875 systemd[1]: Created slice kubepods-besteffort-podd9398051_b480_403c_90d7_54aa5426da90.slice - libcontainer container kubepods-besteffort-podd9398051_b480_403c_90d7_54aa5426da90.slice. Nov 4 23:54:51.029421 systemd[1]: Created slice kubepods-besteffort-pod24a3a22d_e704_4f02_8408_ca1de6f232f0.slice - libcontainer container kubepods-besteffort-pod24a3a22d_e704_4f02_8408_ca1de6f232f0.slice. Nov 4 23:54:51.041472 systemd[1]: Created slice kubepods-besteffort-pod6addfd64_c562_4f9f_bb9f_581ad89a73d8.slice - libcontainer container kubepods-besteffort-pod6addfd64_c562_4f9f_bb9f_581ad89a73d8.slice. Nov 4 23:54:51.052407 systemd[1]: Created slice kubepods-besteffort-pod92120656_4e9c_41d6_aa85_513f1a7aea60.slice - libcontainer container kubepods-besteffort-pod92120656_4e9c_41d6_aa85_513f1a7aea60.slice. Nov 4 23:54:51.067725 systemd[1]: Created slice kubepods-besteffort-pod72fd9e54_497f_4204_80c1_9f81d06cb75e.slice - libcontainer container kubepods-besteffort-pod72fd9e54_497f_4204_80c1_9f81d06cb75e.slice. Nov 4 23:54:51.078532 systemd[1]: Created slice kubepods-burstable-poded949e47_9a3d_4f40_9a33_857a53b3cf70.slice - libcontainer container kubepods-burstable-poded949e47_9a3d_4f40_9a33_857a53b3cf70.slice. 
Nov 4 23:54:51.082706 kubelet[2830]: I1104 23:54:51.082637 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2fn\" (UniqueName: \"kubernetes.io/projected/6addfd64-c562-4f9f-bb9f-581ad89a73d8-kube-api-access-zl2fn\") pod \"calico-apiserver-577ff57f97-lhcd2\" (UID: \"6addfd64-c562-4f9f-bb9f-581ad89a73d8\") " pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" Nov 4 23:54:51.083478 kubelet[2830]: I1104 23:54:51.083444 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zmv\" (UniqueName: \"kubernetes.io/projected/ed949e47-9a3d-4f40-9a33-857a53b3cf70-kube-api-access-r5zmv\") pod \"coredns-66bc5c9577-fsz2q\" (UID: \"ed949e47-9a3d-4f40-9a33-857a53b3cf70\") " pod="kube-system/coredns-66bc5c9577-fsz2q" Nov 4 23:54:51.085960 kubelet[2830]: I1104 23:54:51.083924 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72fd9e54-497f-4204-80c1-9f81d06cb75e-calico-apiserver-certs\") pod \"calico-apiserver-577ff57f97-8frfn\" (UID: \"72fd9e54-497f-4204-80c1-9f81d06cb75e\") " pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" Nov 4 23:54:51.085960 kubelet[2830]: I1104 23:54:51.084009 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnlzb\" (UniqueName: \"kubernetes.io/projected/92120656-4e9c-41d6-aa85-513f1a7aea60-kube-api-access-qnlzb\") pod \"calico-kube-controllers-7bc8f8875-8jrl6\" (UID: \"92120656-4e9c-41d6-aa85-513f1a7aea60\") " pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" Nov 4 23:54:51.085960 kubelet[2830]: I1104 23:54:51.084040 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6lpg\" (UniqueName: 
\"kubernetes.io/projected/26c41d20-6d80-4a04-a264-3af62c30ea8d-kube-api-access-j6lpg\") pod \"coredns-66bc5c9577-2npxc\" (UID: \"26c41d20-6d80-4a04-a264-3af62c30ea8d\") " pod="kube-system/coredns-66bc5c9577-2npxc" Nov 4 23:54:51.085960 kubelet[2830]: I1104 23:54:51.084071 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxb4\" (UniqueName: \"kubernetes.io/projected/d9398051-b480-403c-90d7-54aa5426da90-kube-api-access-njxb4\") pod \"whisker-686857b795-6rtmr\" (UID: \"d9398051-b480-403c-90d7-54aa5426da90\") " pod="calico-system/whisker-686857b795-6rtmr" Nov 4 23:54:51.085960 kubelet[2830]: I1104 23:54:51.084099 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9398051-b480-403c-90d7-54aa5426da90-whisker-ca-bundle\") pod \"whisker-686857b795-6rtmr\" (UID: \"d9398051-b480-403c-90d7-54aa5426da90\") " pod="calico-system/whisker-686857b795-6rtmr" Nov 4 23:54:51.086352 kubelet[2830]: I1104 23:54:51.084134 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6addfd64-c562-4f9f-bb9f-581ad89a73d8-calico-apiserver-certs\") pod \"calico-apiserver-577ff57f97-lhcd2\" (UID: \"6addfd64-c562-4f9f-bb9f-581ad89a73d8\") " pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" Nov 4 23:54:51.086352 kubelet[2830]: I1104 23:54:51.084158 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed949e47-9a3d-4f40-9a33-857a53b3cf70-config-volume\") pod \"coredns-66bc5c9577-fsz2q\" (UID: \"ed949e47-9a3d-4f40-9a33-857a53b3cf70\") " pod="kube-system/coredns-66bc5c9577-fsz2q" Nov 4 23:54:51.086352 kubelet[2830]: I1104 23:54:51.084186 2830 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26c41d20-6d80-4a04-a264-3af62c30ea8d-config-volume\") pod \"coredns-66bc5c9577-2npxc\" (UID: \"26c41d20-6d80-4a04-a264-3af62c30ea8d\") " pod="kube-system/coredns-66bc5c9577-2npxc" Nov 4 23:54:51.086352 kubelet[2830]: I1104 23:54:51.084213 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/24a3a22d-e704-4f02-8408-ca1de6f232f0-goldmane-key-pair\") pod \"goldmane-7c778bb748-hx5ms\" (UID: \"24a3a22d-e704-4f02-8408-ca1de6f232f0\") " pod="calico-system/goldmane-7c778bb748-hx5ms" Nov 4 23:54:51.086352 kubelet[2830]: I1104 23:54:51.084246 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92120656-4e9c-41d6-aa85-513f1a7aea60-tigera-ca-bundle\") pod \"calico-kube-controllers-7bc8f8875-8jrl6\" (UID: \"92120656-4e9c-41d6-aa85-513f1a7aea60\") " pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" Nov 4 23:54:51.086989 kubelet[2830]: I1104 23:54:51.084277 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d9398051-b480-403c-90d7-54aa5426da90-whisker-backend-key-pair\") pod \"whisker-686857b795-6rtmr\" (UID: \"d9398051-b480-403c-90d7-54aa5426da90\") " pod="calico-system/whisker-686857b795-6rtmr" Nov 4 23:54:51.086989 kubelet[2830]: I1104 23:54:51.084307 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a3a22d-e704-4f02-8408-ca1de6f232f0-config\") pod \"goldmane-7c778bb748-hx5ms\" (UID: \"24a3a22d-e704-4f02-8408-ca1de6f232f0\") " pod="calico-system/goldmane-7c778bb748-hx5ms" Nov 4 23:54:51.086989 kubelet[2830]: I1104 
23:54:51.084330 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24a3a22d-e704-4f02-8408-ca1de6f232f0-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-hx5ms\" (UID: \"24a3a22d-e704-4f02-8408-ca1de6f232f0\") " pod="calico-system/goldmane-7c778bb748-hx5ms" Nov 4 23:54:51.086989 kubelet[2830]: I1104 23:54:51.084355 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dp6\" (UniqueName: \"kubernetes.io/projected/24a3a22d-e704-4f02-8408-ca1de6f232f0-kube-api-access-z4dp6\") pod \"goldmane-7c778bb748-hx5ms\" (UID: \"24a3a22d-e704-4f02-8408-ca1de6f232f0\") " pod="calico-system/goldmane-7c778bb748-hx5ms" Nov 4 23:54:51.086989 kubelet[2830]: I1104 23:54:51.084384 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgtl8\" (UniqueName: \"kubernetes.io/projected/72fd9e54-497f-4204-80c1-9f81d06cb75e-kube-api-access-pgtl8\") pod \"calico-apiserver-577ff57f97-8frfn\" (UID: \"72fd9e54-497f-4204-80c1-9f81d06cb75e\") " pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" Nov 4 23:54:51.298374 kubelet[2830]: E1104 23:54:51.298224 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:51.310862 containerd[1598]: time="2025-11-04T23:54:51.310806804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2npxc,Uid:26c41d20-6d80-4a04-a264-3af62c30ea8d,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:51.329875 containerd[1598]: time="2025-11-04T23:54:51.329816550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-686857b795-6rtmr,Uid:d9398051-b480-403c-90d7-54aa5426da90,Namespace:calico-system,Attempt:0,}" Nov 4 23:54:51.364756 containerd[1598]: 
time="2025-11-04T23:54:51.362457222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577ff57f97-lhcd2,Uid:6addfd64-c562-4f9f-bb9f-581ad89a73d8,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:54:51.368942 containerd[1598]: time="2025-11-04T23:54:51.368751875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bc8f8875-8jrl6,Uid:92120656-4e9c-41d6-aa85-513f1a7aea60,Namespace:calico-system,Attempt:0,}" Nov 4 23:54:51.384291 containerd[1598]: time="2025-11-04T23:54:51.384188987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577ff57f97-8frfn,Uid:72fd9e54-497f-4204-80c1-9f81d06cb75e,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:54:51.391169 kubelet[2830]: E1104 23:54:51.391006 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:51.393211 containerd[1598]: time="2025-11-04T23:54:51.393152031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-hx5ms,Uid:24a3a22d-e704-4f02-8408-ca1de6f232f0,Namespace:calico-system,Attempt:0,}" Nov 4 23:54:51.404436 containerd[1598]: time="2025-11-04T23:54:51.404378917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fsz2q,Uid:ed949e47-9a3d-4f40-9a33-857a53b3cf70,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:51.641950 systemd[1]: Created slice kubepods-besteffort-pod907a3d1f_a9d8_4fa7_9529_2703403b5056.slice - libcontainer container kubepods-besteffort-pod907a3d1f_a9d8_4fa7_9529_2703403b5056.slice. 
Nov 4 23:54:51.651458 containerd[1598]: time="2025-11-04T23:54:51.651161968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2h4kv,Uid:907a3d1f-a9d8-4fa7-9529-2703403b5056,Namespace:calico-system,Attempt:0,}" Nov 4 23:54:51.778192 containerd[1598]: time="2025-11-04T23:54:51.777638764Z" level=error msg="Failed to destroy network for sandbox \"27e83ff5237bfd5046ad472196cd37b05da5d1db9af8200adb754e700cd55c0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.787141 containerd[1598]: time="2025-11-04T23:54:51.787009800Z" level=error msg="Failed to destroy network for sandbox \"872ee1346f60485230059dff5f49cd47203e39c39f0ae1947bd5173cbdc0d143\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.787448 containerd[1598]: time="2025-11-04T23:54:51.787331070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bc8f8875-8jrl6,Uid:92120656-4e9c-41d6-aa85-513f1a7aea60,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27e83ff5237bfd5046ad472196cd37b05da5d1db9af8200adb754e700cd55c0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.793408 containerd[1598]: time="2025-11-04T23:54:51.793299894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2npxc,Uid:26c41d20-6d80-4a04-a264-3af62c30ea8d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"872ee1346f60485230059dff5f49cd47203e39c39f0ae1947bd5173cbdc0d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.804883 kubelet[2830]: E1104 23:54:51.804822 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27e83ff5237bfd5046ad472196cd37b05da5d1db9af8200adb754e700cd55c0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.805114 kubelet[2830]: E1104 23:54:51.804909 2830 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27e83ff5237bfd5046ad472196cd37b05da5d1db9af8200adb754e700cd55c0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" Nov 4 23:54:51.805114 kubelet[2830]: E1104 23:54:51.804955 2830 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27e83ff5237bfd5046ad472196cd37b05da5d1db9af8200adb754e700cd55c0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" Nov 4 23:54:51.805114 kubelet[2830]: E1104 23:54:51.805031 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bc8f8875-8jrl6_calico-system(92120656-4e9c-41d6-aa85-513f1a7aea60)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-kube-controllers-7bc8f8875-8jrl6_calico-system(92120656-4e9c-41d6-aa85-513f1a7aea60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27e83ff5237bfd5046ad472196cd37b05da5d1db9af8200adb754e700cd55c0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" podUID="92120656-4e9c-41d6-aa85-513f1a7aea60" Nov 4 23:54:51.805705 kubelet[2830]: E1104 23:54:51.804677 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"872ee1346f60485230059dff5f49cd47203e39c39f0ae1947bd5173cbdc0d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.805705 kubelet[2830]: E1104 23:54:51.805487 2830 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"872ee1346f60485230059dff5f49cd47203e39c39f0ae1947bd5173cbdc0d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2npxc" Nov 4 23:54:51.805705 kubelet[2830]: E1104 23:54:51.805515 2830 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"872ee1346f60485230059dff5f49cd47203e39c39f0ae1947bd5173cbdc0d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2npxc" Nov 4 23:54:51.805853 
kubelet[2830]: E1104 23:54:51.805597 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2npxc_kube-system(26c41d20-6d80-4a04-a264-3af62c30ea8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2npxc_kube-system(26c41d20-6d80-4a04-a264-3af62c30ea8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"872ee1346f60485230059dff5f49cd47203e39c39f0ae1947bd5173cbdc0d143\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2npxc" podUID="26c41d20-6d80-4a04-a264-3af62c30ea8d" Nov 4 23:54:51.811759 containerd[1598]: time="2025-11-04T23:54:51.811649361Z" level=error msg="Failed to destroy network for sandbox \"2d6b7310a662778871ff57e3c06867f5c512d307af67608643c5362d2dd32e31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.818023 containerd[1598]: time="2025-11-04T23:54:51.817966159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-686857b795-6rtmr,Uid:d9398051-b480-403c-90d7-54aa5426da90,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d6b7310a662778871ff57e3c06867f5c512d307af67608643c5362d2dd32e31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.819475 kubelet[2830]: E1104 23:54:51.819425 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2d6b7310a662778871ff57e3c06867f5c512d307af67608643c5362d2dd32e31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.819605 kubelet[2830]: E1104 23:54:51.819512 2830 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d6b7310a662778871ff57e3c06867f5c512d307af67608643c5362d2dd32e31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-686857b795-6rtmr" Nov 4 23:54:51.819605 kubelet[2830]: E1104 23:54:51.819541 2830 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d6b7310a662778871ff57e3c06867f5c512d307af67608643c5362d2dd32e31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-686857b795-6rtmr" Nov 4 23:54:51.820701 kubelet[2830]: E1104 23:54:51.819649 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-686857b795-6rtmr_calico-system(d9398051-b480-403c-90d7-54aa5426da90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-686857b795-6rtmr_calico-system(d9398051-b480-403c-90d7-54aa5426da90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d6b7310a662778871ff57e3c06867f5c512d307af67608643c5362d2dd32e31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-686857b795-6rtmr" 
podUID="d9398051-b480-403c-90d7-54aa5426da90" Nov 4 23:54:51.854112 containerd[1598]: time="2025-11-04T23:54:51.854005902Z" level=error msg="Failed to destroy network for sandbox \"54f5ea43b6c43b9348cb66a57b458a6d6842aa796716a792cb7956dfaadcd1d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.865745 containerd[1598]: time="2025-11-04T23:54:51.862467140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577ff57f97-lhcd2,Uid:6addfd64-c562-4f9f-bb9f-581ad89a73d8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f5ea43b6c43b9348cb66a57b458a6d6842aa796716a792cb7956dfaadcd1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.869088 containerd[1598]: time="2025-11-04T23:54:51.866403274Z" level=error msg="Failed to destroy network for sandbox \"e949dde9d7858926ee7e6f6ca993f534f1f04022e09012f232eefd1b6dc4d8a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.872341 containerd[1598]: time="2025-11-04T23:54:51.872234580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-hx5ms,Uid:24a3a22d-e704-4f02-8408-ca1de6f232f0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e949dde9d7858926ee7e6f6ca993f534f1f04022e09012f232eefd1b6dc4d8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 
4 23:54:51.874695 kubelet[2830]: E1104 23:54:51.874414 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f5ea43b6c43b9348cb66a57b458a6d6842aa796716a792cb7956dfaadcd1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.875929 kubelet[2830]: E1104 23:54:51.874305 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e949dde9d7858926ee7e6f6ca993f534f1f04022e09012f232eefd1b6dc4d8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.875929 kubelet[2830]: E1104 23:54:51.874766 2830 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e949dde9d7858926ee7e6f6ca993f534f1f04022e09012f232eefd1b6dc4d8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-hx5ms" Nov 4 23:54:51.875929 kubelet[2830]: E1104 23:54:51.874800 2830 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e949dde9d7858926ee7e6f6ca993f534f1f04022e09012f232eefd1b6dc4d8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-hx5ms" Nov 4 23:54:51.876332 kubelet[2830]: E1104 23:54:51.874871 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-7c778bb748-hx5ms_calico-system(24a3a22d-e704-4f02-8408-ca1de6f232f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-hx5ms_calico-system(24a3a22d-e704-4f02-8408-ca1de6f232f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e949dde9d7858926ee7e6f6ca993f534f1f04022e09012f232eefd1b6dc4d8a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-hx5ms" podUID="24a3a22d-e704-4f02-8408-ca1de6f232f0" Nov 4 23:54:51.876332 kubelet[2830]: E1104 23:54:51.874579 2830 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f5ea43b6c43b9348cb66a57b458a6d6842aa796716a792cb7956dfaadcd1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" Nov 4 23:54:51.876332 kubelet[2830]: E1104 23:54:51.875681 2830 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f5ea43b6c43b9348cb66a57b458a6d6842aa796716a792cb7956dfaadcd1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" Nov 4 23:54:51.876467 kubelet[2830]: E1104 23:54:51.875788 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-577ff57f97-lhcd2_calico-apiserver(6addfd64-c562-4f9f-bb9f-581ad89a73d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-577ff57f97-lhcd2_calico-apiserver(6addfd64-c562-4f9f-bb9f-581ad89a73d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54f5ea43b6c43b9348cb66a57b458a6d6842aa796716a792cb7956dfaadcd1d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" podUID="6addfd64-c562-4f9f-bb9f-581ad89a73d8" Nov 4 23:54:51.880250 kubelet[2830]: E1104 23:54:51.880207 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:51.889508 containerd[1598]: time="2025-11-04T23:54:51.889467684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 4 23:54:51.901435 containerd[1598]: time="2025-11-04T23:54:51.899703944Z" level=error msg="Failed to destroy network for sandbox \"eb681929a0893dfc7e8e1a00e36c40ba8d3115e04197b39ba8a87f0c2578ee8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.903919 containerd[1598]: time="2025-11-04T23:54:51.903711448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577ff57f97-8frfn,Uid:72fd9e54-497f-4204-80c1-9f81d06cb75e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb681929a0893dfc7e8e1a00e36c40ba8d3115e04197b39ba8a87f0c2578ee8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.904113 kubelet[2830]: E1104 23:54:51.904040 2830 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb681929a0893dfc7e8e1a00e36c40ba8d3115e04197b39ba8a87f0c2578ee8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.904202 kubelet[2830]: E1104 23:54:51.904109 2830 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb681929a0893dfc7e8e1a00e36c40ba8d3115e04197b39ba8a87f0c2578ee8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" Nov 4 23:54:51.904202 kubelet[2830]: E1104 23:54:51.904140 2830 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb681929a0893dfc7e8e1a00e36c40ba8d3115e04197b39ba8a87f0c2578ee8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" Nov 4 23:54:51.904328 kubelet[2830]: E1104 23:54:51.904215 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-577ff57f97-8frfn_calico-apiserver(72fd9e54-497f-4204-80c1-9f81d06cb75e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-577ff57f97-8frfn_calico-apiserver(72fd9e54-497f-4204-80c1-9f81d06cb75e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb681929a0893dfc7e8e1a00e36c40ba8d3115e04197b39ba8a87f0c2578ee8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" podUID="72fd9e54-497f-4204-80c1-9f81d06cb75e" Nov 4 23:54:51.906042 containerd[1598]: time="2025-11-04T23:54:51.905885865Z" level=error msg="Failed to destroy network for sandbox \"7be2dcd1ae140a81ff069afe2703663bde83e9fe72973a0d584e054a018a7849\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.907531 containerd[1598]: time="2025-11-04T23:54:51.907472135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fsz2q,Uid:ed949e47-9a3d-4f40-9a33-857a53b3cf70,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be2dcd1ae140a81ff069afe2703663bde83e9fe72973a0d584e054a018a7849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.908339 kubelet[2830]: E1104 23:54:51.908284 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be2dcd1ae140a81ff069afe2703663bde83e9fe72973a0d584e054a018a7849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.908471 kubelet[2830]: E1104 23:54:51.908361 2830 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be2dcd1ae140a81ff069afe2703663bde83e9fe72973a0d584e054a018a7849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fsz2q" Nov 4 23:54:51.908471 kubelet[2830]: E1104 23:54:51.908421 2830 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be2dcd1ae140a81ff069afe2703663bde83e9fe72973a0d584e054a018a7849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fsz2q" Nov 4 23:54:51.908579 kubelet[2830]: E1104 23:54:51.908515 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fsz2q_kube-system(ed949e47-9a3d-4f40-9a33-857a53b3cf70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fsz2q_kube-system(ed949e47-9a3d-4f40-9a33-857a53b3cf70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7be2dcd1ae140a81ff069afe2703663bde83e9fe72973a0d584e054a018a7849\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fsz2q" podUID="ed949e47-9a3d-4f40-9a33-857a53b3cf70" Nov 4 23:54:51.976835 containerd[1598]: time="2025-11-04T23:54:51.976755437Z" level=error msg="Failed to destroy network for sandbox \"527e5c2c3e95660e33ad42ebef10c9ba674107b27d293985758df9a8061d71ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.977979 containerd[1598]: time="2025-11-04T23:54:51.977930275Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-2h4kv,Uid:907a3d1f-a9d8-4fa7-9529-2703403b5056,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"527e5c2c3e95660e33ad42ebef10c9ba674107b27d293985758df9a8061d71ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.978326 kubelet[2830]: E1104 23:54:51.978277 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527e5c2c3e95660e33ad42ebef10c9ba674107b27d293985758df9a8061d71ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:54:51.978412 kubelet[2830]: E1104 23:54:51.978366 2830 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527e5c2c3e95660e33ad42ebef10c9ba674107b27d293985758df9a8061d71ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2h4kv" Nov 4 23:54:51.978475 kubelet[2830]: E1104 23:54:51.978452 2830 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527e5c2c3e95660e33ad42ebef10c9ba674107b27d293985758df9a8061d71ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2h4kv" Nov 4 23:54:51.978608 kubelet[2830]: E1104 23:54:51.978570 2830 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2h4kv_calico-system(907a3d1f-a9d8-4fa7-9529-2703403b5056)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2h4kv_calico-system(907a3d1f-a9d8-4fa7-9529-2703403b5056)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"527e5c2c3e95660e33ad42ebef10c9ba674107b27d293985758df9a8061d71ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056" Nov 4 23:54:57.811211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033891009.mount: Deactivated successfully. Nov 4 23:54:58.013814 containerd[1598]: time="2025-11-04T23:54:58.013741142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 4 23:54:58.072794 containerd[1598]: time="2025-11-04T23:54:58.072544430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.182792815s" Nov 4 23:54:58.072794 containerd[1598]: time="2025-11-04T23:54:58.072592807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 23:54:58.077595 containerd[1598]: time="2025-11-04T23:54:58.077485457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:58.099433 containerd[1598]: time="2025-11-04T23:54:58.099302004Z" level=info 
msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:58.100060 containerd[1598]: time="2025-11-04T23:54:58.099960082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:58.140483 containerd[1598]: time="2025-11-04T23:54:58.140414430Z" level=info msg="CreateContainer within sandbox \"7a0d1ccc5132c9a10b960d9e18739b3b1e53c223d91b4bf98e01e543703836f2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 23:54:58.281192 containerd[1598]: time="2025-11-04T23:54:58.280916232Z" level=info msg="Container 1469d86dbea8af5395702340570ad22a88bd77f9275b86366231ca08705f452c: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:58.282040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922187836.mount: Deactivated successfully. 
Nov 4 23:54:58.331321 containerd[1598]: time="2025-11-04T23:54:58.331092730Z" level=info msg="CreateContainer within sandbox \"7a0d1ccc5132c9a10b960d9e18739b3b1e53c223d91b4bf98e01e543703836f2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1469d86dbea8af5395702340570ad22a88bd77f9275b86366231ca08705f452c\"" Nov 4 23:54:58.333343 containerd[1598]: time="2025-11-04T23:54:58.333100503Z" level=info msg="StartContainer for \"1469d86dbea8af5395702340570ad22a88bd77f9275b86366231ca08705f452c\"" Nov 4 23:54:58.343204 containerd[1598]: time="2025-11-04T23:54:58.343089363Z" level=info msg="connecting to shim 1469d86dbea8af5395702340570ad22a88bd77f9275b86366231ca08705f452c" address="unix:///run/containerd/s/539f8c6abb6cf16631aa9c77583c23edf03fbf9853ec1aebb44f3b3432d57d15" protocol=ttrpc version=3 Nov 4 23:54:58.473007 systemd[1]: Started cri-containerd-1469d86dbea8af5395702340570ad22a88bd77f9275b86366231ca08705f452c.scope - libcontainer container 1469d86dbea8af5395702340570ad22a88bd77f9275b86366231ca08705f452c. Nov 4 23:54:58.571788 containerd[1598]: time="2025-11-04T23:54:58.571712715Z" level=info msg="StartContainer for \"1469d86dbea8af5395702340570ad22a88bd77f9275b86366231ca08705f452c\" returns successfully" Nov 4 23:54:58.680581 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 23:54:58.681949 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 4 23:54:58.921582 kubelet[2830]: E1104 23:54:58.921479 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:58.960802 kubelet[2830]: I1104 23:54:58.957498 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v9r9l" podStartSLOduration=2.037589905 podStartE2EDuration="17.957471564s" podCreationTimestamp="2025-11-04 23:54:41 +0000 UTC" firstStartedPulling="2025-11-04 23:54:42.179945272 +0000 UTC m=+23.759419646" lastFinishedPulling="2025-11-04 23:54:58.099826918 +0000 UTC m=+39.679301305" observedRunningTime="2025-11-04 23:54:58.952166953 +0000 UTC m=+40.531641348" watchObservedRunningTime="2025-11-04 23:54:58.957471564 +0000 UTC m=+40.536945960" Nov 4 23:54:59.064970 kubelet[2830]: I1104 23:54:59.063862 2830 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9398051-b480-403c-90d7-54aa5426da90-whisker-ca-bundle\") pod \"d9398051-b480-403c-90d7-54aa5426da90\" (UID: \"d9398051-b480-403c-90d7-54aa5426da90\") " Nov 4 23:54:59.064970 kubelet[2830]: I1104 23:54:59.063914 2830 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d9398051-b480-403c-90d7-54aa5426da90-whisker-backend-key-pair\") pod \"d9398051-b480-403c-90d7-54aa5426da90\" (UID: \"d9398051-b480-403c-90d7-54aa5426da90\") " Nov 4 23:54:59.064970 kubelet[2830]: I1104 23:54:59.063946 2830 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njxb4\" (UniqueName: \"kubernetes.io/projected/d9398051-b480-403c-90d7-54aa5426da90-kube-api-access-njxb4\") pod \"d9398051-b480-403c-90d7-54aa5426da90\" (UID: \"d9398051-b480-403c-90d7-54aa5426da90\") " Nov 4 23:54:59.066881 kubelet[2830]: I1104 
23:54:59.065705 2830 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9398051-b480-403c-90d7-54aa5426da90-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d9398051-b480-403c-90d7-54aa5426da90" (UID: "d9398051-b480-403c-90d7-54aa5426da90"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:54:59.104099 systemd[1]: var-lib-kubelet-pods-d9398051\x2db480\x2d403c\x2d90d7\x2d54aa5426da90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnjxb4.mount: Deactivated successfully. Nov 4 23:54:59.111914 kubelet[2830]: I1104 23:54:59.111854 2830 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9398051-b480-403c-90d7-54aa5426da90-kube-api-access-njxb4" (OuterVolumeSpecName: "kube-api-access-njxb4") pod "d9398051-b480-403c-90d7-54aa5426da90" (UID: "d9398051-b480-403c-90d7-54aa5426da90"). InnerVolumeSpecName "kube-api-access-njxb4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:54:59.117652 systemd[1]: var-lib-kubelet-pods-d9398051\x2db480\x2d403c\x2d90d7\x2d54aa5426da90-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 4 23:54:59.117952 kubelet[2830]: I1104 23:54:59.117588 2830 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9398051-b480-403c-90d7-54aa5426da90-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d9398051-b480-403c-90d7-54aa5426da90" (UID: "d9398051-b480-403c-90d7-54aa5426da90"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:54:59.164702 kubelet[2830]: I1104 23:54:59.164629 2830 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d9398051-b480-403c-90d7-54aa5426da90-whisker-backend-key-pair\") on node \"ci-4487.0.0-n-b9f348caa0\" DevicePath \"\"" Nov 4 23:54:59.166266 kubelet[2830]: I1104 23:54:59.166225 2830 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-njxb4\" (UniqueName: \"kubernetes.io/projected/d9398051-b480-403c-90d7-54aa5426da90-kube-api-access-njxb4\") on node \"ci-4487.0.0-n-b9f348caa0\" DevicePath \"\"" Nov 4 23:54:59.166528 kubelet[2830]: I1104 23:54:59.166509 2830 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9398051-b480-403c-90d7-54aa5426da90-whisker-ca-bundle\") on node \"ci-4487.0.0-n-b9f348caa0\" DevicePath \"\"" Nov 4 23:54:59.249600 containerd[1598]: time="2025-11-04T23:54:59.249335017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1469d86dbea8af5395702340570ad22a88bd77f9275b86366231ca08705f452c\" id:\"29c67efe45e64c04712fc484e2fce9be94429907b74d5ddc08423f3b37a80fd0\" pid:3927 exit_status:1 exited_at:{seconds:1762300499 nanos:240003322}" Nov 4 23:54:59.926092 kubelet[2830]: E1104 23:54:59.925449 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:54:59.935938 systemd[1]: Removed slice kubepods-besteffort-podd9398051_b480_403c_90d7_54aa5426da90.slice - libcontainer container kubepods-besteffort-podd9398051_b480_403c_90d7_54aa5426da90.slice. Nov 4 23:55:00.154307 systemd[1]: Created slice kubepods-besteffort-podb6a0e661_70b0_458b_8d61_43e16ce05a61.slice - libcontainer container kubepods-besteffort-podb6a0e661_70b0_458b_8d61_43e16ce05a61.slice. 
Nov 4 23:55:00.199315 kubelet[2830]: I1104 23:55:00.199067 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c67d\" (UniqueName: \"kubernetes.io/projected/b6a0e661-70b0-458b-8d61-43e16ce05a61-kube-api-access-4c67d\") pod \"whisker-9bcffb49f-rncd9\" (UID: \"b6a0e661-70b0-458b-8d61-43e16ce05a61\") " pod="calico-system/whisker-9bcffb49f-rncd9" Nov 4 23:55:00.199989 kubelet[2830]: I1104 23:55:00.199828 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b6a0e661-70b0-458b-8d61-43e16ce05a61-whisker-backend-key-pair\") pod \"whisker-9bcffb49f-rncd9\" (UID: \"b6a0e661-70b0-458b-8d61-43e16ce05a61\") " pod="calico-system/whisker-9bcffb49f-rncd9" Nov 4 23:55:00.200576 kubelet[2830]: I1104 23:55:00.200090 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6a0e661-70b0-458b-8d61-43e16ce05a61-whisker-ca-bundle\") pod \"whisker-9bcffb49f-rncd9\" (UID: \"b6a0e661-70b0-458b-8d61-43e16ce05a61\") " pod="calico-system/whisker-9bcffb49f-rncd9" Nov 4 23:55:00.222138 containerd[1598]: time="2025-11-04T23:55:00.222077793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1469d86dbea8af5395702340570ad22a88bd77f9275b86366231ca08705f452c\" id:\"91e63186d4cf3801d5d9242637a5a8792e48ad1c429c08209cc85b14dadf09bb\" pid:3964 exit_status:1 exited_at:{seconds:1762300500 nanos:221387509}" Nov 4 23:55:00.468192 containerd[1598]: time="2025-11-04T23:55:00.468049787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9bcffb49f-rncd9,Uid:b6a0e661-70b0-458b-8d61-43e16ce05a61,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:00.648998 kubelet[2830]: I1104 23:55:00.648608 2830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9398051-b480-403c-90d7-54aa5426da90" 
path="/var/lib/kubelet/pods/d9398051-b480-403c-90d7-54aa5426da90/volumes" Nov 4 23:55:00.926069 systemd-networkd[1493]: cali28e151b67fc: Link UP Nov 4 23:55:00.926285 systemd-networkd[1493]: cali28e151b67fc: Gained carrier Nov 4 23:55:00.970681 containerd[1598]: 2025-11-04 23:55:00.561 [INFO][3995] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:55:00.970681 containerd[1598]: 2025-11-04 23:55:00.633 [INFO][3995] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0 whisker-9bcffb49f- calico-system b6a0e661-70b0-458b-8d61-43e16ce05a61 956 0 2025-11-04 23:55:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9bcffb49f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487.0.0-n-b9f348caa0 whisker-9bcffb49f-rncd9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali28e151b67fc [] [] }} ContainerID="e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" Namespace="calico-system" Pod="whisker-9bcffb49f-rncd9" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-" Nov 4 23:55:00.970681 containerd[1598]: 2025-11-04 23:55:00.633 [INFO][3995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" Namespace="calico-system" Pod="whisker-9bcffb49f-rncd9" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0" Nov 4 23:55:00.970681 containerd[1598]: 2025-11-04 23:55:00.831 [INFO][4059] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" HandleID="k8s-pod-network.e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" 
Workload="ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0" Nov 4 23:55:00.972909 containerd[1598]: 2025-11-04 23:55:00.834 [INFO][4059] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" HandleID="k8s-pod-network.e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" Workload="ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000331690), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-b9f348caa0", "pod":"whisker-9bcffb49f-rncd9", "timestamp":"2025-11-04 23:55:00.831127447 +0000 UTC"}, Hostname:"ci-4487.0.0-n-b9f348caa0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:55:00.972909 containerd[1598]: 2025-11-04 23:55:00.834 [INFO][4059] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:00.972909 containerd[1598]: 2025-11-04 23:55:00.835 [INFO][4059] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:55:00.972909 containerd[1598]: 2025-11-04 23:55:00.836 [INFO][4059] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-b9f348caa0' Nov 4 23:55:00.972909 containerd[1598]: 2025-11-04 23:55:00.853 [INFO][4059] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:00.972909 containerd[1598]: 2025-11-04 23:55:00.868 [INFO][4059] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:00.972909 containerd[1598]: 2025-11-04 23:55:00.877 [INFO][4059] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:00.972909 containerd[1598]: 2025-11-04 23:55:00.880 [INFO][4059] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:00.972909 containerd[1598]: 2025-11-04 23:55:00.883 [INFO][4059] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:00.973208 containerd[1598]: 2025-11-04 23:55:00.883 [INFO][4059] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:00.973208 containerd[1598]: 2025-11-04 23:55:00.885 [INFO][4059] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b Nov 4 23:55:00.973208 containerd[1598]: 2025-11-04 23:55:00.891 [INFO][4059] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:00.973208 containerd[1598]: 2025-11-04 23:55:00.900 [INFO][4059] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.34.65/26] block=192.168.34.64/26 handle="k8s-pod-network.e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:00.973208 containerd[1598]: 2025-11-04 23:55:00.900 [INFO][4059] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.65/26] handle="k8s-pod-network.e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:00.973208 containerd[1598]: 2025-11-04 23:55:00.900 [INFO][4059] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:55:00.973208 containerd[1598]: 2025-11-04 23:55:00.900 [INFO][4059] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.65/26] IPv6=[] ContainerID="e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" HandleID="k8s-pod-network.e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" Workload="ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0" Nov 4 23:55:00.973434 containerd[1598]: 2025-11-04 23:55:00.904 [INFO][3995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" Namespace="calico-system" Pod="whisker-9bcffb49f-rncd9" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0", GenerateName:"whisker-9bcffb49f-", Namespace:"calico-system", SelfLink:"", UID:"b6a0e661-70b0-458b-8d61-43e16ce05a61", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9bcffb49f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"", Pod:"whisker-9bcffb49f-rncd9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28e151b67fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:00.973434 containerd[1598]: 2025-11-04 23:55:00.904 [INFO][3995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.65/32] ContainerID="e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" Namespace="calico-system" Pod="whisker-9bcffb49f-rncd9" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0" Nov 4 23:55:00.973536 containerd[1598]: 2025-11-04 23:55:00.904 [INFO][3995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28e151b67fc ContainerID="e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" Namespace="calico-system" Pod="whisker-9bcffb49f-rncd9" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0" Nov 4 23:55:00.973536 containerd[1598]: 2025-11-04 23:55:00.921 [INFO][3995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" Namespace="calico-system" Pod="whisker-9bcffb49f-rncd9" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0" Nov 4 23:55:00.973594 containerd[1598]: 2025-11-04 23:55:00.921 [INFO][3995] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" Namespace="calico-system" Pod="whisker-9bcffb49f-rncd9" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0", GenerateName:"whisker-9bcffb49f-", Namespace:"calico-system", SelfLink:"", UID:"b6a0e661-70b0-458b-8d61-43e16ce05a61", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9bcffb49f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b", Pod:"whisker-9bcffb49f-rncd9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28e151b67fc", MAC:"7e:9b:65:a9:b0:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:00.979538 containerd[1598]: 2025-11-04 23:55:00.955 [INFO][3995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" Namespace="calico-system" Pod="whisker-9bcffb49f-rncd9" 
WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-whisker--9bcffb49f--rncd9-eth0" Nov 4 23:55:01.259351 containerd[1598]: time="2025-11-04T23:55:01.258946037Z" level=info msg="connecting to shim e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b" address="unix:///run/containerd/s/ea2ccd189a3178b3ea4367c2dbfec436848ac1c066f7feae09379b7265b0cee8" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:01.309049 systemd[1]: Started cri-containerd-e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b.scope - libcontainer container e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b. Nov 4 23:55:01.416297 containerd[1598]: time="2025-11-04T23:55:01.416230154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9bcffb49f-rncd9,Uid:b6a0e661-70b0-458b-8d61-43e16ce05a61,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6c73b17b4a015dd2fb477dac2a0346a7c156508d29e13088a17b7d5c0934f0b\"" Nov 4 23:55:01.429010 containerd[1598]: time="2025-11-04T23:55:01.428845802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:55:01.862744 containerd[1598]: time="2025-11-04T23:55:01.862348578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:01.878816 containerd[1598]: time="2025-11-04T23:55:01.864756590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:55:01.878816 containerd[1598]: time="2025-11-04T23:55:01.866888702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:55:01.879107 kubelet[2830]: E1104 23:55:01.872164 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:55:01.879107 kubelet[2830]: E1104 23:55:01.872256 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:55:01.879107 kubelet[2830]: E1104 23:55:01.872420 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9bcffb49f-rncd9_calico-system(b6a0e661-70b0-458b-8d61-43e16ce05a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:01.881397 containerd[1598]: time="2025-11-04T23:55:01.878180947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:55:02.233561 containerd[1598]: time="2025-11-04T23:55:02.232916521Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:02.236158 containerd[1598]: time="2025-11-04T23:55:02.235391273Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:55:02.236158 containerd[1598]: time="2025-11-04T23:55:02.235487249Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:55:02.236857 kubelet[2830]: E1104 23:55:02.236582 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:55:02.236857 kubelet[2830]: E1104 23:55:02.236680 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:55:02.236857 kubelet[2830]: E1104 23:55:02.236785 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9bcffb49f-rncd9_calico-system(b6a0e661-70b0-458b-8d61-43e16ce05a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:02.237059 kubelet[2830]: E1104 23:55:02.236887 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9bcffb49f-rncd9" podUID="b6a0e661-70b0-458b-8d61-43e16ce05a61" Nov 4 23:55:02.637368 kubelet[2830]: E1104 23:55:02.636997 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:02.638697 containerd[1598]: time="2025-11-04T23:55:02.638489753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fsz2q,Uid:ed949e47-9a3d-4f40-9a33-857a53b3cf70,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:02.648361 kubelet[2830]: E1104 23:55:02.647370 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:02.648615 containerd[1598]: time="2025-11-04T23:55:02.648126081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2npxc,Uid:26c41d20-6d80-4a04-a264-3af62c30ea8d,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:02.730917 systemd-networkd[1493]: cali28e151b67fc: Gained IPv6LL Nov 4 23:55:02.937112 systemd-networkd[1493]: cali8b6a7bc900a: Link UP Nov 4 23:55:02.942405 systemd-networkd[1493]: cali8b6a7bc900a: Gained carrier Nov 4 23:55:02.968858 kubelet[2830]: E1104 23:55:02.968520 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9bcffb49f-rncd9" podUID="b6a0e661-70b0-458b-8d61-43e16ce05a61" Nov 4 23:55:02.992727 containerd[1598]: 2025-11-04 23:55:02.740 [INFO][4167] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:55:02.992727 containerd[1598]: 2025-11-04 23:55:02.769 [INFO][4167] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0 coredns-66bc5c9577- kube-system 26c41d20-6d80-4a04-a264-3af62c30ea8d 869 0 2025-11-04 23:54:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-n-b9f348caa0 coredns-66bc5c9577-2npxc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8b6a7bc900a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" Namespace="kube-system" Pod="coredns-66bc5c9577-2npxc" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-" Nov 4 23:55:02.992727 containerd[1598]: 2025-11-04 23:55:02.770 [INFO][4167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" 
Namespace="kube-system" Pod="coredns-66bc5c9577-2npxc" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0" Nov 4 23:55:02.992727 containerd[1598]: 2025-11-04 23:55:02.849 [INFO][4190] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" HandleID="k8s-pod-network.04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" Workload="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0" Nov 4 23:55:02.993646 containerd[1598]: 2025-11-04 23:55:02.850 [INFO][4190] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" HandleID="k8s-pod-network.04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" Workload="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5320), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.0-n-b9f348caa0", "pod":"coredns-66bc5c9577-2npxc", "timestamp":"2025-11-04 23:55:02.849742485 +0000 UTC"}, Hostname:"ci-4487.0.0-n-b9f348caa0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:55:02.993646 containerd[1598]: 2025-11-04 23:55:02.850 [INFO][4190] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:02.993646 containerd[1598]: 2025-11-04 23:55:02.850 [INFO][4190] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:55:02.993646 containerd[1598]: 2025-11-04 23:55:02.850 [INFO][4190] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-b9f348caa0' Nov 4 23:55:02.993646 containerd[1598]: 2025-11-04 23:55:02.868 [INFO][4190] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:02.993646 containerd[1598]: 2025-11-04 23:55:02.878 [INFO][4190] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:02.993646 containerd[1598]: 2025-11-04 23:55:02.889 [INFO][4190] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:02.993646 containerd[1598]: 2025-11-04 23:55:02.893 [INFO][4190] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:02.993646 containerd[1598]: 2025-11-04 23:55:02.898 [INFO][4190] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:02.994275 containerd[1598]: 2025-11-04 23:55:02.898 [INFO][4190] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:02.994275 containerd[1598]: 2025-11-04 23:55:02.903 [INFO][4190] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b Nov 4 23:55:02.994275 containerd[1598]: 2025-11-04 23:55:02.910 [INFO][4190] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:02.994275 containerd[1598]: 2025-11-04 23:55:02.919 [INFO][4190] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.34.66/26] block=192.168.34.64/26 handle="k8s-pod-network.04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:02.994275 containerd[1598]: 2025-11-04 23:55:02.920 [INFO][4190] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.66/26] handle="k8s-pod-network.04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:02.994275 containerd[1598]: 2025-11-04 23:55:02.920 [INFO][4190] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:55:02.994275 containerd[1598]: 2025-11-04 23:55:02.920 [INFO][4190] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.66/26] IPv6=[] ContainerID="04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" HandleID="k8s-pod-network.04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" Workload="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0" Nov 4 23:55:02.994602 containerd[1598]: 2025-11-04 23:55:02.928 [INFO][4167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" Namespace="kube-system" Pod="coredns-66bc5c9577-2npxc" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"26c41d20-6d80-4a04-a264-3af62c30ea8d", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"", Pod:"coredns-66bc5c9577-2npxc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b6a7bc900a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:02.994602 containerd[1598]: 2025-11-04 23:55:02.928 [INFO][4167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.66/32] ContainerID="04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" Namespace="kube-system" Pod="coredns-66bc5c9577-2npxc" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0" Nov 4 23:55:02.994602 containerd[1598]: 2025-11-04 23:55:02.929 [INFO][4167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b6a7bc900a 
ContainerID="04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" Namespace="kube-system" Pod="coredns-66bc5c9577-2npxc" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0" Nov 4 23:55:02.994602 containerd[1598]: 2025-11-04 23:55:02.940 [INFO][4167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" Namespace="kube-system" Pod="coredns-66bc5c9577-2npxc" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0" Nov 4 23:55:02.994602 containerd[1598]: 2025-11-04 23:55:02.941 [INFO][4167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" Namespace="kube-system" Pod="coredns-66bc5c9577-2npxc" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"26c41d20-6d80-4a04-a264-3af62c30ea8d", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b", 
Pod:"coredns-66bc5c9577-2npxc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b6a7bc900a", MAC:"02:7c:09:ef:f7:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:03.000176 containerd[1598]: 2025-11-04 23:55:02.979 [INFO][4167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" Namespace="kube-system" Pod="coredns-66bc5c9577-2npxc" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--2npxc-eth0" Nov 4 23:55:03.049926 containerd[1598]: time="2025-11-04T23:55:03.049825975Z" level=info msg="connecting to shim 04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b" address="unix:///run/containerd/s/c75fed6c554743b439b2b8043edcb280e491612c97fdf0d450b400a9892242c8" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:03.122219 systemd[1]: Started cri-containerd-04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b.scope - libcontainer container 
04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b. Nov 4 23:55:03.157843 systemd-networkd[1493]: calic68df1400ae: Link UP Nov 4 23:55:03.161640 systemd-networkd[1493]: calic68df1400ae: Gained carrier Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:02.751 [INFO][4166] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:02.788 [INFO][4166] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0 coredns-66bc5c9577- kube-system ed949e47-9a3d-4f40-9a33-857a53b3cf70 882 0 2025-11-04 23:54:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-n-b9f348caa0 coredns-66bc5c9577-fsz2q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic68df1400ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" Namespace="kube-system" Pod="coredns-66bc5c9577-fsz2q" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:02.789 [INFO][4166] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" Namespace="kube-system" Pod="coredns-66bc5c9577-fsz2q" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:02.853 [INFO][4195] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" 
HandleID="k8s-pod-network.09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" Workload="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:02.854 [INFO][4195] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" HandleID="k8s-pod-network.09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" Workload="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.0-n-b9f348caa0", "pod":"coredns-66bc5c9577-fsz2q", "timestamp":"2025-11-04 23:55:02.853449958 +0000 UTC"}, Hostname:"ci-4487.0.0-n-b9f348caa0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:02.854 [INFO][4195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:02.920 [INFO][4195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:02.920 [INFO][4195] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-b9f348caa0' Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:02.976 [INFO][4195] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:03.018 [INFO][4195] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:03.038 [INFO][4195] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:03.056 [INFO][4195] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:03.066 [INFO][4195] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:03.066 [INFO][4195] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:03.075 [INFO][4195] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828 Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:03.106 [INFO][4195] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:03.138 [INFO][4195] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.34.67/26] block=192.168.34.64/26 handle="k8s-pod-network.09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:03.138 [INFO][4195] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.67/26] handle="k8s-pod-network.09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:03.138 [INFO][4195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:55:03.207579 containerd[1598]: 2025-11-04 23:55:03.138 [INFO][4195] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.67/26] IPv6=[] ContainerID="09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" HandleID="k8s-pod-network.09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" Workload="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0" Nov 4 23:55:03.209438 containerd[1598]: 2025-11-04 23:55:03.145 [INFO][4166] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" Namespace="kube-system" Pod="coredns-66bc5c9577-fsz2q" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ed949e47-9a3d-4f40-9a33-857a53b3cf70", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"", Pod:"coredns-66bc5c9577-fsz2q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic68df1400ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:03.209438 containerd[1598]: 2025-11-04 23:55:03.146 [INFO][4166] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.67/32] ContainerID="09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" Namespace="kube-system" Pod="coredns-66bc5c9577-fsz2q" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0" Nov 4 23:55:03.209438 containerd[1598]: 2025-11-04 23:55:03.147 [INFO][4166] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic68df1400ae 
ContainerID="09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" Namespace="kube-system" Pod="coredns-66bc5c9577-fsz2q" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0" Nov 4 23:55:03.209438 containerd[1598]: 2025-11-04 23:55:03.167 [INFO][4166] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" Namespace="kube-system" Pod="coredns-66bc5c9577-fsz2q" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0" Nov 4 23:55:03.209438 containerd[1598]: 2025-11-04 23:55:03.175 [INFO][4166] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" Namespace="kube-system" Pod="coredns-66bc5c9577-fsz2q" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ed949e47-9a3d-4f40-9a33-857a53b3cf70", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828", 
Pod:"coredns-66bc5c9577-fsz2q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic68df1400ae", MAC:"0a:35:ef:9c:1b:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:03.211513 containerd[1598]: 2025-11-04 23:55:03.196 [INFO][4166] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" Namespace="kube-system" Pod="coredns-66bc5c9577-fsz2q" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-coredns--66bc5c9577--fsz2q-eth0" Nov 4 23:55:03.247703 containerd[1598]: time="2025-11-04T23:55:03.247134766Z" level=info msg="connecting to shim 09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828" address="unix:///run/containerd/s/b8b68a053d1df17bcfb257ecc00a99b53db624fd15e7e9195e1f9de06a8ca2f3" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:03.300595 containerd[1598]: time="2025-11-04T23:55:03.300167581Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-2npxc,Uid:26c41d20-6d80-4a04-a264-3af62c30ea8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b\"" Nov 4 23:55:03.302485 kubelet[2830]: E1104 23:55:03.302447 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:03.303005 systemd[1]: Started cri-containerd-09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828.scope - libcontainer container 09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828. Nov 4 23:55:03.315783 containerd[1598]: time="2025-11-04T23:55:03.315739469Z" level=info msg="CreateContainer within sandbox \"04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:55:03.332813 containerd[1598]: time="2025-11-04T23:55:03.332733992Z" level=info msg="Container ec790c566e2f6565bdb9a3f005f9417b6a7e6d577892faad6543e089f876dba0: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:03.344473 containerd[1598]: time="2025-11-04T23:55:03.344373810Z" level=info msg="CreateContainer within sandbox \"04c1f4bab157eeed8d1b8144600ae1e1c783afb73bd39139a61df5977115608b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec790c566e2f6565bdb9a3f005f9417b6a7e6d577892faad6543e089f876dba0\"" Nov 4 23:55:03.345753 containerd[1598]: time="2025-11-04T23:55:03.345690958Z" level=info msg="StartContainer for \"ec790c566e2f6565bdb9a3f005f9417b6a7e6d577892faad6543e089f876dba0\"" Nov 4 23:55:03.347368 containerd[1598]: time="2025-11-04T23:55:03.347295711Z" level=info msg="connecting to shim ec790c566e2f6565bdb9a3f005f9417b6a7e6d577892faad6543e089f876dba0" address="unix:///run/containerd/s/c75fed6c554743b439b2b8043edcb280e491612c97fdf0d450b400a9892242c8" protocol=ttrpc version=3 Nov 4 23:55:03.382637 
systemd[1]: Started cri-containerd-ec790c566e2f6565bdb9a3f005f9417b6a7e6d577892faad6543e089f876dba0.scope - libcontainer container ec790c566e2f6565bdb9a3f005f9417b6a7e6d577892faad6543e089f876dba0. Nov 4 23:55:03.438240 containerd[1598]: time="2025-11-04T23:55:03.437801947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fsz2q,Uid:ed949e47-9a3d-4f40-9a33-857a53b3cf70,Namespace:kube-system,Attempt:0,} returns sandbox id \"09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828\"" Nov 4 23:55:03.441569 kubelet[2830]: E1104 23:55:03.441439 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:03.456775 containerd[1598]: time="2025-11-04T23:55:03.455614296Z" level=info msg="CreateContainer within sandbox \"09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:55:03.485268 containerd[1598]: time="2025-11-04T23:55:03.484458233Z" level=info msg="Container 756523cdf25c308eb5731308f4b38e0a9aeb3f4cbc85e63ec48801d3a2360221: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:03.515423 containerd[1598]: time="2025-11-04T23:55:03.515350886Z" level=info msg="StartContainer for \"ec790c566e2f6565bdb9a3f005f9417b6a7e6d577892faad6543e089f876dba0\" returns successfully" Nov 4 23:55:03.519067 containerd[1598]: time="2025-11-04T23:55:03.518507129Z" level=info msg="CreateContainer within sandbox \"09cb96dde7fa01b7015a8c0b345f4b347283ed324c7449affd93353c5d743828\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"756523cdf25c308eb5731308f4b38e0a9aeb3f4cbc85e63ec48801d3a2360221\"" Nov 4 23:55:03.519725 containerd[1598]: time="2025-11-04T23:55:03.519457107Z" level=info msg="StartContainer for \"756523cdf25c308eb5731308f4b38e0a9aeb3f4cbc85e63ec48801d3a2360221\"" Nov 4 23:55:03.521465 containerd[1598]: 
time="2025-11-04T23:55:03.521420442Z" level=info msg="connecting to shim 756523cdf25c308eb5731308f4b38e0a9aeb3f4cbc85e63ec48801d3a2360221" address="unix:///run/containerd/s/b8b68a053d1df17bcfb257ecc00a99b53db624fd15e7e9195e1f9de06a8ca2f3" protocol=ttrpc version=3 Nov 4 23:55:03.561296 systemd[1]: Started cri-containerd-756523cdf25c308eb5731308f4b38e0a9aeb3f4cbc85e63ec48801d3a2360221.scope - libcontainer container 756523cdf25c308eb5731308f4b38e0a9aeb3f4cbc85e63ec48801d3a2360221. Nov 4 23:55:03.640223 containerd[1598]: time="2025-11-04T23:55:03.640167551Z" level=info msg="StartContainer for \"756523cdf25c308eb5731308f4b38e0a9aeb3f4cbc85e63ec48801d3a2360221\" returns successfully" Nov 4 23:55:03.642358 containerd[1598]: time="2025-11-04T23:55:03.642308401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bc8f8875-8jrl6,Uid:92120656-4e9c-41d6-aa85-513f1a7aea60,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:03.898111 systemd-networkd[1493]: calid96a8144f65: Link UP Nov 4 23:55:03.901145 systemd-networkd[1493]: calid96a8144f65: Gained carrier Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.698 [INFO][4387] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.726 [INFO][4387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0 calico-kube-controllers-7bc8f8875- calico-system 92120656-4e9c-41d6-aa85-513f1a7aea60 878 0 2025-11-04 23:54:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bc8f8875 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487.0.0-n-b9f348caa0 calico-kube-controllers-7bc8f8875-8jrl6 eth0 calico-kube-controllers [] [] 
[kns.calico-system ksa.calico-system.calico-kube-controllers] calid96a8144f65 [] [] }} ContainerID="5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" Namespace="calico-system" Pod="calico-kube-controllers-7bc8f8875-8jrl6" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.726 [INFO][4387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" Namespace="calico-system" Pod="calico-kube-controllers-7bc8f8875-8jrl6" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.807 [INFO][4400] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" HandleID="k8s-pod-network.5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" Workload="ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.808 [INFO][4400] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" HandleID="k8s-pod-network.5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" Workload="ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-b9f348caa0", "pod":"calico-kube-controllers-7bc8f8875-8jrl6", "timestamp":"2025-11-04 23:55:03.807979435 +0000 UTC"}, Hostname:"ci-4487.0.0-n-b9f348caa0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.809 [INFO][4400] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.809 [INFO][4400] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.809 [INFO][4400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-b9f348caa0' Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.821 [INFO][4400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.829 [INFO][4400] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.839 [INFO][4400] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.843 [INFO][4400] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.856 [INFO][4400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.856 [INFO][4400] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.860 [INFO][4400] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.877 [INFO][4400] ipam/ipam.go 1246: 
Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.888 [INFO][4400] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.68/26] block=192.168.34.64/26 handle="k8s-pod-network.5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.888 [INFO][4400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.68/26] handle="k8s-pod-network.5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.888 [INFO][4400] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:55:03.947823 containerd[1598]: 2025-11-04 23:55:03.888 [INFO][4400] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.68/26] IPv6=[] ContainerID="5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" HandleID="k8s-pod-network.5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" Workload="ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0" Nov 4 23:55:03.950203 containerd[1598]: 2025-11-04 23:55:03.892 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" Namespace="calico-system" Pod="calico-kube-controllers-7bc8f8875-8jrl6" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0", GenerateName:"calico-kube-controllers-7bc8f8875-", Namespace:"calico-system", SelfLink:"", 
UID:"92120656-4e9c-41d6-aa85-513f1a7aea60", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bc8f8875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"", Pod:"calico-kube-controllers-7bc8f8875-8jrl6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid96a8144f65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:03.950203 containerd[1598]: 2025-11-04 23:55:03.892 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.68/32] ContainerID="5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" Namespace="calico-system" Pod="calico-kube-controllers-7bc8f8875-8jrl6" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0" Nov 4 23:55:03.950203 containerd[1598]: 2025-11-04 23:55:03.892 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid96a8144f65 ContainerID="5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" Namespace="calico-system" Pod="calico-kube-controllers-7bc8f8875-8jrl6" 
WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0" Nov 4 23:55:03.950203 containerd[1598]: 2025-11-04 23:55:03.902 [INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" Namespace="calico-system" Pod="calico-kube-controllers-7bc8f8875-8jrl6" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0" Nov 4 23:55:03.950203 containerd[1598]: 2025-11-04 23:55:03.904 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" Namespace="calico-system" Pod="calico-kube-controllers-7bc8f8875-8jrl6" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0", GenerateName:"calico-kube-controllers-7bc8f8875-", Namespace:"calico-system", SelfLink:"", UID:"92120656-4e9c-41d6-aa85-513f1a7aea60", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bc8f8875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", 
ContainerID:"5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee", Pod:"calico-kube-controllers-7bc8f8875-8jrl6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid96a8144f65", MAC:"aa:16:80:68:65:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:03.950203 containerd[1598]: 2025-11-04 23:55:03.944 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" Namespace="calico-system" Pod="calico-kube-controllers-7bc8f8875-8jrl6" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--kube--controllers--7bc8f8875--8jrl6-eth0" Nov 4 23:55:04.000186 kubelet[2830]: E1104 23:55:04.000093 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:04.016583 kubelet[2830]: E1104 23:55:04.016549 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:04.018928 containerd[1598]: time="2025-11-04T23:55:04.018862119Z" level=info msg="connecting to shim 5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee" address="unix:///run/containerd/s/8457e3e6bfb13beb63b9f7870fdba7865bb9649dc3647fe8d136cccc14b32cde" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:04.113458 systemd[1]: Started cri-containerd-5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee.scope - libcontainer container 5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee. 
Nov 4 23:55:04.130123 kubelet[2830]: I1104 23:55:04.130042 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2npxc" podStartSLOduration=41.130017262 podStartE2EDuration="41.130017262s" podCreationTimestamp="2025-11-04 23:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:04.129027321 +0000 UTC m=+45.708501717" watchObservedRunningTime="2025-11-04 23:55:04.130017262 +0000 UTC m=+45.709491659" Nov 4 23:55:04.130422 kubelet[2830]: I1104 23:55:04.130199 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fsz2q" podStartSLOduration=41.130192817 podStartE2EDuration="41.130192817s" podCreationTimestamp="2025-11-04 23:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:04.086355396 +0000 UTC m=+45.665829801" watchObservedRunningTime="2025-11-04 23:55:04.130192817 +0000 UTC m=+45.709667257" Nov 4 23:55:04.310307 containerd[1598]: time="2025-11-04T23:55:04.310215186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bc8f8875-8jrl6,Uid:92120656-4e9c-41d6-aa85-513f1a7aea60,Namespace:calico-system,Attempt:0,} returns sandbox id \"5484bf54a34e0b1a51b738671ecd08b7c015a44a56c5f0f58cc06c4749d040ee\"" Nov 4 23:55:04.316573 containerd[1598]: time="2025-11-04T23:55:04.316506177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:55:04.458140 systemd-networkd[1493]: calic68df1400ae: Gained IPv6LL Nov 4 23:55:04.637163 containerd[1598]: time="2025-11-04T23:55:04.637039426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577ff57f97-8frfn,Uid:72fd9e54-497f-4204-80c1-9f81d06cb75e,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:55:04.729736 containerd[1598]: 
time="2025-11-04T23:55:04.729689441Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:04.730919 containerd[1598]: time="2025-11-04T23:55:04.730842052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:55:04.731448 containerd[1598]: time="2025-11-04T23:55:04.730988388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:55:04.731978 kubelet[2830]: E1104 23:55:04.731655 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:55:04.732119 kubelet[2830]: E1104 23:55:04.732000 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:55:04.733810 kubelet[2830]: E1104 23:55:04.732291 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7bc8f8875-8jrl6_calico-system(92120656-4e9c-41d6-aa85-513f1a7aea60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:04.733810 kubelet[2830]: E1104 23:55:04.732347 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" podUID="92120656-4e9c-41d6-aa85-513f1a7aea60" Nov 4 23:55:04.744140 kubelet[2830]: I1104 23:55:04.744016 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:55:04.747706 kubelet[2830]: E1104 23:55:04.745239 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:04.888825 systemd-networkd[1493]: calic3a65e57b3c: Link UP Nov 4 23:55:04.891363 systemd-networkd[1493]: calic3a65e57b3c: Gained carrier Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.692 [INFO][4474] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.709 [INFO][4474] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0 calico-apiserver-577ff57f97- calico-apiserver 72fd9e54-497f-4204-80c1-9f81d06cb75e 879 0 2025-11-04 23:54:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:577ff57f97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-n-b9f348caa0 calico-apiserver-577ff57f97-8frfn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic3a65e57b3c [] [] }} ContainerID="b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-8frfn" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.709 [INFO][4474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-8frfn" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.778 [INFO][4487] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" HandleID="k8s-pod-network.b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" Workload="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.779 [INFO][4487] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" HandleID="k8s-pod-network.b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" Workload="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-n-b9f348caa0", "pod":"calico-apiserver-577ff57f97-8frfn", "timestamp":"2025-11-04 23:55:04.778235606 +0000 UTC"}, Hostname:"ci-4487.0.0-n-b9f348caa0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.779 [INFO][4487] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.779 [INFO][4487] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.779 [INFO][4487] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-b9f348caa0' Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.796 [INFO][4487] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.809 [INFO][4487] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.822 [INFO][4487] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.830 [INFO][4487] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.837 [INFO][4487] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.837 [INFO][4487] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.849 [INFO][4487] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.860 [INFO][4487] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.877 [INFO][4487] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.69/26] block=192.168.34.64/26 handle="k8s-pod-network.b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.878 [INFO][4487] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.69/26] handle="k8s-pod-network.b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.878 [INFO][4487] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:55:04.913992 containerd[1598]: 2025-11-04 23:55:04.878 [INFO][4487] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.69/26] IPv6=[] ContainerID="b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" HandleID="k8s-pod-network.b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" Workload="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0" Nov 4 23:55:04.915881 containerd[1598]: 2025-11-04 23:55:04.882 [INFO][4474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-8frfn" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0", GenerateName:"calico-apiserver-577ff57f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"72fd9e54-497f-4204-80c1-9f81d06cb75e", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577ff57f97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"", Pod:"calico-apiserver-577ff57f97-8frfn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3a65e57b3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:04.915881 containerd[1598]: 2025-11-04 23:55:04.883 [INFO][4474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.69/32] ContainerID="b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-8frfn" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0" Nov 4 23:55:04.915881 containerd[1598]: 2025-11-04 23:55:04.883 [INFO][4474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3a65e57b3c ContainerID="b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-8frfn" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0" Nov 4 23:55:04.915881 containerd[1598]: 2025-11-04 23:55:04.893 [INFO][4474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-8frfn" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0" Nov 4 23:55:04.915881 containerd[1598]: 2025-11-04 23:55:04.893 [INFO][4474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-8frfn" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0", GenerateName:"calico-apiserver-577ff57f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"72fd9e54-497f-4204-80c1-9f81d06cb75e", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577ff57f97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb", Pod:"calico-apiserver-577ff57f97-8frfn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3a65e57b3c", MAC:"0a:38:1f:a8:33:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:04.915881 containerd[1598]: 2025-11-04 23:55:04.908 [INFO][4474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-8frfn" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--8frfn-eth0" Nov 4 23:55:04.962383 containerd[1598]: time="2025-11-04T23:55:04.961529631Z" level=info 
msg="connecting to shim b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb" address="unix:///run/containerd/s/b6b1a0d8b8d8dae233b2067498ee3caea9d8c9fdecd9ce681d1305821080132a" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:04.971037 systemd-networkd[1493]: cali8b6a7bc900a: Gained IPv6LL Nov 4 23:55:05.003019 systemd[1]: Started cri-containerd-b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb.scope - libcontainer container b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb. Nov 4 23:55:05.021161 kubelet[2830]: E1104 23:55:05.021125 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:05.023341 kubelet[2830]: E1104 23:55:05.023199 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" podUID="92120656-4e9c-41d6-aa85-513f1a7aea60" Nov 4 23:55:05.024980 kubelet[2830]: E1104 23:55:05.024906 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:05.025512 kubelet[2830]: E1104 23:55:05.025448 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:05.103339 containerd[1598]: 
time="2025-11-04T23:55:05.103186268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577ff57f97-8frfn,Uid:72fd9e54-497f-4204-80c1-9f81d06cb75e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b279f9eca91ac2b97fc27be6a0fa782cda704aa0b10bf6c64cf391fa6aec18eb\"" Nov 4 23:55:05.107883 containerd[1598]: time="2025-11-04T23:55:05.107747717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:55:05.412286 containerd[1598]: time="2025-11-04T23:55:05.411979325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:05.414214 containerd[1598]: time="2025-11-04T23:55:05.413903387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:55:05.414615 containerd[1598]: time="2025-11-04T23:55:05.414116695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:55:05.414987 kubelet[2830]: E1104 23:55:05.414936 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:55:05.414987 kubelet[2830]: E1104 23:55:05.414987 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:55:05.415174 kubelet[2830]: E1104 23:55:05.415110 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-577ff57f97-8frfn_calico-apiserver(72fd9e54-497f-4204-80c1-9f81d06cb75e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:05.415174 kubelet[2830]: E1104 23:55:05.415148 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" podUID="72fd9e54-497f-4204-80c1-9f81d06cb75e" Nov 4 23:55:05.637411 containerd[1598]: time="2025-11-04T23:55:05.637347168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-hx5ms,Uid:24a3a22d-e704-4f02-8408-ca1de6f232f0,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:05.639179 containerd[1598]: time="2025-11-04T23:55:05.638482378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577ff57f97-lhcd2,Uid:6addfd64-c562-4f9f-bb9f-581ad89a73d8,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:55:05.866551 systemd-networkd[1493]: calid96a8144f65: Gained IPv6LL Nov 4 23:55:05.986897 systemd-networkd[1493]: cali6062d9cd83e: Link UP Nov 4 23:55:05.990596 systemd-networkd[1493]: cali6062d9cd83e: Gained carrier Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.772 [INFO][4595] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0 calico-apiserver-577ff57f97- calico-apiserver 6addfd64-c562-4f9f-bb9f-581ad89a73d8 881 0 2025-11-04 23:54:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:577ff57f97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-n-b9f348caa0 calico-apiserver-577ff57f97-lhcd2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6062d9cd83e [] [] }} ContainerID="a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-lhcd2" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-" Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.772 [INFO][4595] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-lhcd2" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0" Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.902 [INFO][4630] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" HandleID="k8s-pod-network.a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" Workload="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0" Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.904 [INFO][4630] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" HandleID="k8s-pod-network.a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" 
Workload="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000353ac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-n-b9f348caa0", "pod":"calico-apiserver-577ff57f97-lhcd2", "timestamp":"2025-11-04 23:55:05.902897945 +0000 UTC"}, Hostname:"ci-4487.0.0-n-b9f348caa0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.904 [INFO][4630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.905 [INFO][4630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.905 [INFO][4630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-b9f348caa0' Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.920 [INFO][4630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.931 [INFO][4630] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.940 [INFO][4630] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.944 [INFO][4630] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.948 [INFO][4630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 
23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.948 [INFO][4630] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.953 [INFO][4630] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.963 [INFO][4630] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.974 [INFO][4630] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.70/26] block=192.168.34.64/26 handle="k8s-pod-network.a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.974 [INFO][4630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.70/26] handle="k8s-pod-network.a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.975 [INFO][4630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:55:06.040757 containerd[1598]: 2025-11-04 23:55:05.975 [INFO][4630] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.70/26] IPv6=[] ContainerID="a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" HandleID="k8s-pod-network.a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" Workload="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0" Nov 4 23:55:06.044522 containerd[1598]: 2025-11-04 23:55:05.981 [INFO][4595] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-lhcd2" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0", GenerateName:"calico-apiserver-577ff57f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"6addfd64-c562-4f9f-bb9f-581ad89a73d8", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577ff57f97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"", Pod:"calico-apiserver-577ff57f97-lhcd2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6062d9cd83e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:06.044522 containerd[1598]: 2025-11-04 23:55:05.981 [INFO][4595] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.70/32] ContainerID="a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-lhcd2" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0" Nov 4 23:55:06.044522 containerd[1598]: 2025-11-04 23:55:05.982 [INFO][4595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6062d9cd83e ContainerID="a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-lhcd2" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0" Nov 4 23:55:06.044522 containerd[1598]: 2025-11-04 23:55:05.995 [INFO][4595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-lhcd2" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0" Nov 4 23:55:06.044522 containerd[1598]: 2025-11-04 23:55:05.999 [INFO][4595] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-lhcd2" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0", GenerateName:"calico-apiserver-577ff57f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"6addfd64-c562-4f9f-bb9f-581ad89a73d8", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577ff57f97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac", Pod:"calico-apiserver-577ff57f97-lhcd2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6062d9cd83e", MAC:"b2:87:5e:30:87:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:06.044522 containerd[1598]: 2025-11-04 23:55:06.024 [INFO][4595] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" Namespace="calico-apiserver" Pod="calico-apiserver-577ff57f97-lhcd2" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-calico--apiserver--577ff57f97--lhcd2-eth0" Nov 4 23:55:06.048163 kubelet[2830]: E1104 23:55:06.045185 2830 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:06.050212 kubelet[2830]: E1104 23:55:06.050165 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:06.052758 kubelet[2830]: E1104 23:55:06.052712 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" podUID="92120656-4e9c-41d6-aa85-513f1a7aea60" Nov 4 23:55:06.054253 kubelet[2830]: E1104 23:55:06.053937 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" podUID="72fd9e54-497f-4204-80c1-9f81d06cb75e" Nov 4 23:55:06.110374 containerd[1598]: time="2025-11-04T23:55:06.110148320Z" level=info msg="connecting to shim a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac" 
address="unix:///run/containerd/s/12348bf90ed5e1cf0d6a3d2c7818e9d9c3fb333c3886969dbbb9e91dea677e56" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:06.164042 systemd-networkd[1493]: calib85aeceb5c7: Link UP Nov 4 23:55:06.164297 systemd-networkd[1493]: calib85aeceb5c7: Gained carrier Nov 4 23:55:06.180943 systemd[1]: Started cri-containerd-a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac.scope - libcontainer container a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac. Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:05.805 [INFO][4606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0 goldmane-7c778bb748- calico-system 24a3a22d-e704-4f02-8408-ca1de6f232f0 875 0 2025-11-04 23:54:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487.0.0-n-b9f348caa0 goldmane-7c778bb748-hx5ms eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib85aeceb5c7 [] [] }} ContainerID="64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" Namespace="calico-system" Pod="goldmane-7c778bb748-hx5ms" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:05.806 [INFO][4606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" Namespace="calico-system" Pod="goldmane-7c778bb748-hx5ms" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:05.928 [INFO][4636] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" HandleID="k8s-pod-network.64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" Workload="ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:05.929 [INFO][4636] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" HandleID="k8s-pod-network.64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" Workload="ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-b9f348caa0", "pod":"goldmane-7c778bb748-hx5ms", "timestamp":"2025-11-04 23:55:05.928648785 +0000 UTC"}, Hostname:"ci-4487.0.0-n-b9f348caa0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:05.929 [INFO][4636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:05.975 [INFO][4636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:05.975 [INFO][4636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-b9f348caa0' Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.027 [INFO][4636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.049 [INFO][4636] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.069 [INFO][4636] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.079 [INFO][4636] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.089 [INFO][4636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.089 [INFO][4636] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.100 [INFO][4636] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64 Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.117 [INFO][4636] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.142 [INFO][4636] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.34.71/26] block=192.168.34.64/26 handle="k8s-pod-network.64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.142 [INFO][4636] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.71/26] handle="k8s-pod-network.64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.142 [INFO][4636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:55:06.198082 containerd[1598]: 2025-11-04 23:55:06.143 [INFO][4636] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.71/26] IPv6=[] ContainerID="64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" HandleID="k8s-pod-network.64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" Workload="ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0" Nov 4 23:55:06.198805 containerd[1598]: 2025-11-04 23:55:06.152 [INFO][4606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" Namespace="calico-system" Pod="goldmane-7c778bb748-hx5ms" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"24a3a22d-e704-4f02-8408-ca1de6f232f0", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"", Pod:"goldmane-7c778bb748-hx5ms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib85aeceb5c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:06.198805 containerd[1598]: 2025-11-04 23:55:06.153 [INFO][4606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.71/32] ContainerID="64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" Namespace="calico-system" Pod="goldmane-7c778bb748-hx5ms" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0" Nov 4 23:55:06.198805 containerd[1598]: 2025-11-04 23:55:06.154 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib85aeceb5c7 ContainerID="64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" Namespace="calico-system" Pod="goldmane-7c778bb748-hx5ms" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0" Nov 4 23:55:06.198805 containerd[1598]: 2025-11-04 23:55:06.162 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" Namespace="calico-system" Pod="goldmane-7c778bb748-hx5ms" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0" Nov 4 23:55:06.198805 containerd[1598]: 2025-11-04 23:55:06.169 [INFO][4606] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" Namespace="calico-system" Pod="goldmane-7c778bb748-hx5ms" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"24a3a22d-e704-4f02-8408-ca1de6f232f0", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64", Pod:"goldmane-7c778bb748-hx5ms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib85aeceb5c7", MAC:"ae:aa:a4:98:33:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:06.198805 containerd[1598]: 2025-11-04 23:55:06.190 [INFO][4606] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" Namespace="calico-system" 
Pod="goldmane-7c778bb748-hx5ms" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-goldmane--7c778bb748--hx5ms-eth0" Nov 4 23:55:06.254967 containerd[1598]: time="2025-11-04T23:55:06.254906632Z" level=info msg="connecting to shim 64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64" address="unix:///run/containerd/s/a030fe6829e5638c7dadb26644b9feb4294a8b52068ba52d055cab849d1aaf01" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:06.312022 systemd[1]: Started cri-containerd-64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64.scope - libcontainer container 64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64. Nov 4 23:55:06.325711 containerd[1598]: time="2025-11-04T23:55:06.325633428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577ff57f97-lhcd2,Uid:6addfd64-c562-4f9f-bb9f-581ad89a73d8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a0caa316f3ce1440186cc97a69a768465cdb0947e2a7bfc7f7794a7160ea2aac\"" Nov 4 23:55:06.329350 containerd[1598]: time="2025-11-04T23:55:06.329241245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:55:06.481889 containerd[1598]: time="2025-11-04T23:55:06.481556129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-hx5ms,Uid:24a3a22d-e704-4f02-8408-ca1de6f232f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"64c89973919f7b43473ac2139529d796db90ed440cd1749f6797c6cdad9b8b64\"" Nov 4 23:55:06.635513 systemd-networkd[1493]: calic3a65e57b3c: Gained IPv6LL Nov 4 23:55:06.640727 containerd[1598]: time="2025-11-04T23:55:06.640189710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2h4kv,Uid:907a3d1f-a9d8-4fa7-9529-2703403b5056,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:06.668991 containerd[1598]: time="2025-11-04T23:55:06.668772160Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:06.670622 containerd[1598]: 
time="2025-11-04T23:55:06.670407482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:55:06.670622 containerd[1598]: time="2025-11-04T23:55:06.670511262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:55:06.671236 kubelet[2830]: E1104 23:55:06.671060 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:55:06.671236 kubelet[2830]: E1104 23:55:06.671173 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:55:06.671910 kubelet[2830]: E1104 23:55:06.671500 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-577ff57f97-lhcd2_calico-apiserver(6addfd64-c562-4f9f-bb9f-581ad89a73d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:06.671910 kubelet[2830]: E1104 23:55:06.671580 2830 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" podUID="6addfd64-c562-4f9f-bb9f-581ad89a73d8" Nov 4 23:55:06.673206 containerd[1598]: time="2025-11-04T23:55:06.673173768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:55:06.735305 systemd-networkd[1493]: vxlan.calico: Link UP Nov 4 23:55:06.735323 systemd-networkd[1493]: vxlan.calico: Gained carrier Nov 4 23:55:07.007565 systemd-networkd[1493]: cali879fef4d6ad: Link UP Nov 4 23:55:07.010530 systemd-networkd[1493]: cali879fef4d6ad: Gained carrier Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.802 [INFO][4763] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0 csi-node-driver- calico-system 907a3d1f-a9d8-4fa7-9529-2703403b5056 765 0 2025-11-04 23:54:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487.0.0-n-b9f348caa0 csi-node-driver-2h4kv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali879fef4d6ad [] [] }} ContainerID="2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" Namespace="calico-system" Pod="csi-node-driver-2h4kv" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.803 [INFO][4763] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" Namespace="calico-system" Pod="csi-node-driver-2h4kv" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.909 [INFO][4788] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" HandleID="k8s-pod-network.2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" Workload="ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.909 [INFO][4788] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" HandleID="k8s-pod-network.2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" Workload="ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000275f40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-b9f348caa0", "pod":"csi-node-driver-2h4kv", "timestamp":"2025-11-04 23:55:06.909343058 +0000 UTC"}, Hostname:"ci-4487.0.0-n-b9f348caa0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.917 [INFO][4788] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.917 [INFO][4788] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.917 [INFO][4788] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-b9f348caa0' Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.935 [INFO][4788] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.949 [INFO][4788] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.957 [INFO][4788] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.961 [INFO][4788] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.967 [INFO][4788] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.967 [INFO][4788] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.970 [INFO][4788] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.976 [INFO][4788] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.989 [INFO][4788] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.34.72/26] block=192.168.34.64/26 handle="k8s-pod-network.2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.989 [INFO][4788] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.72/26] handle="k8s-pod-network.2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" host="ci-4487.0.0-n-b9f348caa0" Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.989 [INFO][4788] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:55:07.041872 containerd[1598]: 2025-11-04 23:55:06.989 [INFO][4788] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.72/26] IPv6=[] ContainerID="2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" HandleID="k8s-pod-network.2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" Workload="ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0" Nov 4 23:55:07.044243 containerd[1598]: 2025-11-04 23:55:06.997 [INFO][4763] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" Namespace="calico-system" Pod="csi-node-driver-2h4kv" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"907a3d1f-a9d8-4fa7-9529-2703403b5056", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"", Pod:"csi-node-driver-2h4kv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali879fef4d6ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:07.044243 containerd[1598]: 2025-11-04 23:55:06.997 [INFO][4763] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.72/32] ContainerID="2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" Namespace="calico-system" Pod="csi-node-driver-2h4kv" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0" Nov 4 23:55:07.044243 containerd[1598]: 2025-11-04 23:55:06.997 [INFO][4763] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali879fef4d6ad ContainerID="2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" Namespace="calico-system" Pod="csi-node-driver-2h4kv" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0" Nov 4 23:55:07.044243 containerd[1598]: 2025-11-04 23:55:07.014 [INFO][4763] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" Namespace="calico-system" Pod="csi-node-driver-2h4kv" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0" Nov 4 23:55:07.044243 containerd[1598]: 2025-11-04 23:55:07.017 
[INFO][4763] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" Namespace="calico-system" Pod="csi-node-driver-2h4kv" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"907a3d1f-a9d8-4fa7-9529-2703403b5056", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-b9f348caa0", ContainerID:"2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd", Pod:"csi-node-driver-2h4kv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali879fef4d6ad", MAC:"2a:63:2b:aa:b0:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:07.044243 containerd[1598]: 2025-11-04 23:55:07.033 [INFO][4763] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" Namespace="calico-system" Pod="csi-node-driver-2h4kv" WorkloadEndpoint="ci--4487.0.0--n--b9f348caa0-k8s-csi--node--driver--2h4kv-eth0" Nov 4 23:55:07.051310 containerd[1598]: time="2025-11-04T23:55:07.051154968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:07.056707 containerd[1598]: time="2025-11-04T23:55:07.056387094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:55:07.056707 containerd[1598]: time="2025-11-04T23:55:07.056504340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:55:07.057621 kubelet[2830]: E1104 23:55:07.057348 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:55:07.057621 kubelet[2830]: E1104 23:55:07.057407 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:55:07.059134 kubelet[2830]: E1104 23:55:07.057978 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-hx5ms_calico-system(24a3a22d-e704-4f02-8408-ca1de6f232f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:07.059134 kubelet[2830]: E1104 23:55:07.058028 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hx5ms" podUID="24a3a22d-e704-4f02-8408-ca1de6f232f0" Nov 4 23:55:07.065335 kubelet[2830]: E1104 23:55:07.065244 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" podUID="6addfd64-c562-4f9f-bb9f-581ad89a73d8" Nov 4 23:55:07.065901 kubelet[2830]: E1104 23:55:07.065583 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" podUID="72fd9e54-497f-4204-80c1-9f81d06cb75e" Nov 4 23:55:07.119361 containerd[1598]: time="2025-11-04T23:55:07.119193078Z" level=info msg="connecting to shim 2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd" address="unix:///run/containerd/s/f2b5da3919ef8bb3fa43ac9374df003c30412d8b17260e0c53fb94b6aeb0e943" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:07.195970 systemd[1]: Started cri-containerd-2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd.scope - libcontainer container 2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd. Nov 4 23:55:07.351341 containerd[1598]: time="2025-11-04T23:55:07.351212725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2h4kv,Uid:907a3d1f-a9d8-4fa7-9529-2703403b5056,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ba1261762e49271917e588b8755433f73a9187e841084f6f007c4d3cbe505fd\"" Nov 4 23:55:07.356847 containerd[1598]: time="2025-11-04T23:55:07.356757006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:55:07.593972 systemd-networkd[1493]: calib85aeceb5c7: Gained IPv6LL Nov 4 23:55:07.803909 containerd[1598]: time="2025-11-04T23:55:07.803855712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:07.804881 containerd[1598]: time="2025-11-04T23:55:07.804789133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:55:07.805018 containerd[1598]: time="2025-11-04T23:55:07.804936643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:55:07.805233 kubelet[2830]: E1104 
23:55:07.805166 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:55:07.805397 kubelet[2830]: E1104 23:55:07.805240 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:55:07.805892 kubelet[2830]: E1104 23:55:07.805853 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2h4kv_calico-system(907a3d1f-a9d8-4fa7-9529-2703403b5056): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:07.809530 containerd[1598]: time="2025-11-04T23:55:07.809480160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:55:07.914926 systemd-networkd[1493]: cali6062d9cd83e: Gained IPv6LL Nov 4 23:55:08.075820 kubelet[2830]: E1104 23:55:08.075584 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" podUID="6addfd64-c562-4f9f-bb9f-581ad89a73d8" Nov 4 23:55:08.076778 kubelet[2830]: E1104 23:55:08.076074 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hx5ms" podUID="24a3a22d-e704-4f02-8408-ca1de6f232f0" Nov 4 23:55:08.105961 systemd-networkd[1493]: vxlan.calico: Gained IPv6LL Nov 4 23:55:08.146893 containerd[1598]: time="2025-11-04T23:55:08.146740102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:08.149913 containerd[1598]: time="2025-11-04T23:55:08.149654678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:55:08.150198 containerd[1598]: time="2025-11-04T23:55:08.149853709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:55:08.150507 kubelet[2830]: E1104 23:55:08.150215 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:55:08.150507 kubelet[2830]: E1104 23:55:08.150437 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:55:08.150758 kubelet[2830]: E1104 23:55:08.150546 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2h4kv_calico-system(907a3d1f-a9d8-4fa7-9529-2703403b5056): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:08.150758 kubelet[2830]: E1104 23:55:08.150606 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056" Nov 4 23:55:08.297875 systemd-networkd[1493]: 
cali879fef4d6ad: Gained IPv6LL Nov 4 23:55:09.082518 kubelet[2830]: E1104 23:55:09.082412 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056" Nov 4 23:55:10.894722 systemd[1]: Started sshd@9-137.184.235.85:22-139.178.89.65:52200.service - OpenSSH per-connection server daemon (139.178.89.65:52200). Nov 4 23:55:11.082530 sshd[4903]: Accepted publickey for core from 139.178.89.65 port 52200 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:11.085943 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:11.095997 systemd-logind[1570]: New session 10 of user core. Nov 4 23:55:11.101969 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 23:55:11.695066 sshd[4909]: Connection closed by 139.178.89.65 port 52200 Nov 4 23:55:11.694168 sshd-session[4903]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:11.702372 systemd[1]: sshd@9-137.184.235.85:22-139.178.89.65:52200.service: Deactivated successfully. 
Nov 4 23:55:11.706175 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 23:55:11.708583 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit. Nov 4 23:55:11.711526 systemd-logind[1570]: Removed session 10. Nov 4 23:55:16.635971 containerd[1598]: time="2025-11-04T23:55:16.635718784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:55:16.716950 systemd[1]: Started sshd@10-137.184.235.85:22-139.178.89.65:56270.service - OpenSSH per-connection server daemon (139.178.89.65:56270). Nov 4 23:55:16.807700 sshd[4934]: Accepted publickey for core from 139.178.89.65 port 56270 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:16.809986 sshd-session[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:16.815871 systemd-logind[1570]: New session 11 of user core. Nov 4 23:55:16.823968 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 23:55:16.941720 containerd[1598]: time="2025-11-04T23:55:16.940899132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:16.944616 containerd[1598]: time="2025-11-04T23:55:16.944410472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:55:16.944616 containerd[1598]: time="2025-11-04T23:55:16.944469289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:55:16.946152 kubelet[2830]: E1104 23:55:16.945405 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:55:16.946152 kubelet[2830]: E1104 23:55:16.945454 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:55:16.946152 kubelet[2830]: E1104 23:55:16.945558 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9bcffb49f-rncd9_calico-system(b6a0e661-70b0-458b-8d61-43e16ce05a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:16.950041 containerd[1598]: time="2025-11-04T23:55:16.948593797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:55:17.027097 sshd[4937]: Connection closed by 139.178.89.65 port 56270 Nov 4 23:55:17.028036 sshd-session[4934]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:17.032491 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit. Nov 4 23:55:17.032828 systemd[1]: sshd@10-137.184.235.85:22-139.178.89.65:56270.service: Deactivated successfully. Nov 4 23:55:17.037069 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 23:55:17.041361 systemd-logind[1570]: Removed session 11. 
Nov 4 23:55:17.285248 containerd[1598]: time="2025-11-04T23:55:17.285099297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:17.286881 containerd[1598]: time="2025-11-04T23:55:17.286799176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:55:17.287080 containerd[1598]: time="2025-11-04T23:55:17.286930213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:55:17.287342 kubelet[2830]: E1104 23:55:17.287288 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:55:17.287441 kubelet[2830]: E1104 23:55:17.287348 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:55:17.287875 kubelet[2830]: E1104 23:55:17.287443 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9bcffb49f-rncd9_calico-system(b6a0e661-70b0-458b-8d61-43e16ce05a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:17.287875 kubelet[2830]: E1104 23:55:17.287494 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9bcffb49f-rncd9" podUID="b6a0e661-70b0-458b-8d61-43e16ce05a61" Nov 4 23:55:18.685733 containerd[1598]: time="2025-11-04T23:55:18.685633237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:55:19.022939 containerd[1598]: time="2025-11-04T23:55:19.022702158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:19.023578 containerd[1598]: time="2025-11-04T23:55:19.023511070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:55:19.023753 containerd[1598]: time="2025-11-04T23:55:19.023560433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, 
bytes read=85" Nov 4 23:55:19.023833 kubelet[2830]: E1104 23:55:19.023791 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:55:19.024308 kubelet[2830]: E1104 23:55:19.023852 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:55:19.024308 kubelet[2830]: E1104 23:55:19.023943 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7bc8f8875-8jrl6_calico-system(92120656-4e9c-41d6-aa85-513f1a7aea60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:19.024308 kubelet[2830]: E1104 23:55:19.023978 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" 
podUID="92120656-4e9c-41d6-aa85-513f1a7aea60" Nov 4 23:55:19.642883 containerd[1598]: time="2025-11-04T23:55:19.642753252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:55:19.983792 containerd[1598]: time="2025-11-04T23:55:19.983622056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:19.984775 containerd[1598]: time="2025-11-04T23:55:19.984636835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:55:19.984955 containerd[1598]: time="2025-11-04T23:55:19.984719069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:55:19.985151 kubelet[2830]: E1104 23:55:19.985101 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:55:19.985273 kubelet[2830]: E1104 23:55:19.985152 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:55:19.985273 kubelet[2830]: E1104 23:55:19.985229 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-hx5ms_calico-system(24a3a22d-e704-4f02-8408-ca1de6f232f0): ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:19.985382 kubelet[2830]: E1104 23:55:19.985288 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hx5ms" podUID="24a3a22d-e704-4f02-8408-ca1de6f232f0" Nov 4 23:55:21.638777 containerd[1598]: time="2025-11-04T23:55:21.638677553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:55:21.952239 containerd[1598]: time="2025-11-04T23:55:21.952179747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:21.952841 containerd[1598]: time="2025-11-04T23:55:21.952796336Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:55:21.952961 containerd[1598]: time="2025-11-04T23:55:21.952930840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:55:21.953349 kubelet[2830]: E1104 23:55:21.953140 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:55:21.953349 kubelet[2830]: E1104 23:55:21.953301 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:55:21.954406 kubelet[2830]: E1104 23:55:21.953594 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-577ff57f97-8frfn_calico-apiserver(72fd9e54-497f-4204-80c1-9f81d06cb75e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:21.954406 kubelet[2830]: E1104 23:55:21.953636 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" podUID="72fd9e54-497f-4204-80c1-9f81d06cb75e" Nov 4 23:55:22.042899 systemd[1]: Started sshd@11-137.184.235.85:22-139.178.89.65:56276.service - OpenSSH per-connection server daemon (139.178.89.65:56276). 
Nov 4 23:55:22.125058 sshd[4960]: Accepted publickey for core from 139.178.89.65 port 56276 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:22.127037 sshd-session[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:22.133074 systemd-logind[1570]: New session 12 of user core. Nov 4 23:55:22.139078 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 23:55:22.295243 sshd[4963]: Connection closed by 139.178.89.65 port 56276 Nov 4 23:55:22.299105 sshd-session[4960]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:22.308118 systemd[1]: sshd@11-137.184.235.85:22-139.178.89.65:56276.service: Deactivated successfully. Nov 4 23:55:22.311079 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 23:55:22.313362 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit. Nov 4 23:55:22.315725 systemd-logind[1570]: Removed session 12. Nov 4 23:55:22.636559 containerd[1598]: time="2025-11-04T23:55:22.635826367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:55:22.936417 containerd[1598]: time="2025-11-04T23:55:22.936169760Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:22.938242 containerd[1598]: time="2025-11-04T23:55:22.937922186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:55:22.938242 containerd[1598]: time="2025-11-04T23:55:22.938057023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:55:22.938401 kubelet[2830]: E1104 23:55:22.938359 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:55:22.938527 kubelet[2830]: E1104 23:55:22.938411 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:55:22.940544 kubelet[2830]: E1104 23:55:22.940007 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-577ff57f97-lhcd2_calico-apiserver(6addfd64-c562-4f9f-bb9f-581ad89a73d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:22.940544 kubelet[2830]: E1104 23:55:22.940077 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" podUID="6addfd64-c562-4f9f-bb9f-581ad89a73d8" Nov 4 23:55:22.941012 containerd[1598]: time="2025-11-04T23:55:22.940972198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:55:23.268482 containerd[1598]: time="2025-11-04T23:55:23.268328949Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Nov 4 23:55:23.269631 containerd[1598]: time="2025-11-04T23:55:23.269498382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:55:23.269631 containerd[1598]: time="2025-11-04T23:55:23.269576158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:55:23.269964 kubelet[2830]: E1104 23:55:23.269892 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:55:23.270476 kubelet[2830]: E1104 23:55:23.269965 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:55:23.270476 kubelet[2830]: E1104 23:55:23.270067 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2h4kv_calico-system(907a3d1f-a9d8-4fa7-9529-2703403b5056): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:23.273562 containerd[1598]: time="2025-11-04T23:55:23.273529638Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:55:23.587019 containerd[1598]: time="2025-11-04T23:55:23.586858967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:23.587818 containerd[1598]: time="2025-11-04T23:55:23.587760626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:55:23.587939 containerd[1598]: time="2025-11-04T23:55:23.587883281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:55:23.588333 kubelet[2830]: E1104 23:55:23.588237 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:55:23.588537 kubelet[2830]: E1104 23:55:23.588312 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:55:23.588803 kubelet[2830]: E1104 23:55:23.588670 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-2h4kv_calico-system(907a3d1f-a9d8-4fa7-9529-2703403b5056): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:23.588803 kubelet[2830]: E1104 23:55:23.588730 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056" Nov 4 23:55:27.313139 systemd[1]: Started sshd@12-137.184.235.85:22-139.178.89.65:53954.service - OpenSSH per-connection server daemon (139.178.89.65:53954). Nov 4 23:55:27.409627 sshd[4979]: Accepted publickey for core from 139.178.89.65 port 53954 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:27.412498 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:27.423870 systemd-logind[1570]: New session 13 of user core. Nov 4 23:55:27.437241 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 4 23:55:27.622828 sshd[4982]: Connection closed by 139.178.89.65 port 53954 Nov 4 23:55:27.624877 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:27.638322 systemd[1]: sshd@12-137.184.235.85:22-139.178.89.65:53954.service: Deactivated successfully. Nov 4 23:55:27.641190 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 23:55:27.644118 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit. Nov 4 23:55:27.647057 systemd-logind[1570]: Removed session 13. Nov 4 23:55:27.648812 systemd[1]: Started sshd@13-137.184.235.85:22-139.178.89.65:53966.service - OpenSSH per-connection server daemon (139.178.89.65:53966). Nov 4 23:55:27.725729 sshd[4995]: Accepted publickey for core from 139.178.89.65 port 53966 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:27.728284 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:27.735785 systemd-logind[1570]: New session 14 of user core. Nov 4 23:55:27.747957 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 23:55:27.982687 sshd[4998]: Connection closed by 139.178.89.65 port 53966 Nov 4 23:55:27.986546 sshd-session[4995]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:27.994481 systemd[1]: sshd@13-137.184.235.85:22-139.178.89.65:53966.service: Deactivated successfully. Nov 4 23:55:27.998578 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:55:28.002370 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit. Nov 4 23:55:28.007483 systemd[1]: Started sshd@14-137.184.235.85:22-139.178.89.65:53978.service - OpenSSH per-connection server daemon (139.178.89.65:53978). Nov 4 23:55:28.010127 systemd-logind[1570]: Removed session 14. 
Nov 4 23:55:28.130723 sshd[5016]: Accepted publickey for core from 139.178.89.65 port 53978 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:28.132359 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:28.139367 systemd-logind[1570]: New session 15 of user core. Nov 4 23:55:28.143012 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 23:55:28.311281 sshd[5019]: Connection closed by 139.178.89.65 port 53978 Nov 4 23:55:28.310452 sshd-session[5016]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:28.315621 systemd[1]: sshd@14-137.184.235.85:22-139.178.89.65:53978.service: Deactivated successfully. Nov 4 23:55:28.319319 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 23:55:28.323943 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit. Nov 4 23:55:28.325205 systemd-logind[1570]: Removed session 15. Nov 4 23:55:30.041165 containerd[1598]: time="2025-11-04T23:55:30.041024280Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1469d86dbea8af5395702340570ad22a88bd77f9275b86366231ca08705f452c\" id:\"744f021fd93345cfcdce9bb505dfbf28b67d573a2b5c8ab69b6f7142dbd4548f\" pid:5042 exited_at:{seconds:1762300530 nanos:39934687}" Nov 4 23:55:30.044955 kubelet[2830]: E1104 23:55:30.044904 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:30.634936 kubelet[2830]: E1104 23:55:30.634888 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" podUID="92120656-4e9c-41d6-aa85-513f1a7aea60" Nov 4 23:55:30.636306 kubelet[2830]: E1104 23:55:30.635399 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9bcffb49f-rncd9" podUID="b6a0e661-70b0-458b-8d61-43e16ce05a61" Nov 4 23:55:31.640346 kubelet[2830]: E1104 23:55:31.640233 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:33.326276 systemd[1]: Started sshd@15-137.184.235.85:22-139.178.89.65:53984.service - OpenSSH per-connection server daemon (139.178.89.65:53984). 
Nov 4 23:55:33.423809 sshd[5058]: Accepted publickey for core from 139.178.89.65 port 53984 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:33.426852 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:33.432650 systemd-logind[1570]: New session 16 of user core. Nov 4 23:55:33.443990 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 23:55:33.626622 sshd[5061]: Connection closed by 139.178.89.65 port 53984 Nov 4 23:55:33.628312 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:33.635196 systemd[1]: sshd@15-137.184.235.85:22-139.178.89.65:53984.service: Deactivated successfully. Nov 4 23:55:33.638572 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 23:55:33.639914 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit. Nov 4 23:55:33.641760 systemd-logind[1570]: Removed session 16. Nov 4 23:55:34.634224 kubelet[2830]: E1104 23:55:34.634110 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hx5ms" podUID="24a3a22d-e704-4f02-8408-ca1de6f232f0" Nov 4 23:55:35.634676 kubelet[2830]: E1104 23:55:35.633632 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" podUID="6addfd64-c562-4f9f-bb9f-581ad89a73d8" Nov 4 23:55:35.635455 kubelet[2830]: E1104 23:55:35.634925 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" podUID="72fd9e54-497f-4204-80c1-9f81d06cb75e" Nov 4 23:55:36.634639 kubelet[2830]: E1104 23:55:36.633056 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:38.634064 kubelet[2830]: E1104 23:55:38.633996 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 4 23:55:38.638794 kubelet[2830]: E1104 23:55:38.638619 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056" Nov 4 23:55:38.645422 systemd[1]: Started sshd@16-137.184.235.85:22-139.178.89.65:33064.service - OpenSSH per-connection server daemon (139.178.89.65:33064). Nov 4 23:55:38.766870 sshd[5078]: Accepted publickey for core from 139.178.89.65 port 33064 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:38.769958 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:38.776721 systemd-logind[1570]: New session 17 of user core. Nov 4 23:55:38.787957 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 23:55:38.964135 sshd[5081]: Connection closed by 139.178.89.65 port 33064 Nov 4 23:55:38.965138 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:38.972300 systemd[1]: sshd@16-137.184.235.85:22-139.178.89.65:33064.service: Deactivated successfully. Nov 4 23:55:38.976151 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 23:55:38.980401 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit. Nov 4 23:55:38.982322 systemd-logind[1570]: Removed session 17. Nov 4 23:55:43.636335 containerd[1598]: time="2025-11-04T23:55:43.636251777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:55:43.980185 systemd[1]: Started sshd@17-137.184.235.85:22-139.178.89.65:33078.service - OpenSSH per-connection server daemon (139.178.89.65:33078). 
Nov 4 23:55:43.992143 containerd[1598]: time="2025-11-04T23:55:43.991945610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:43.993080 containerd[1598]: time="2025-11-04T23:55:43.993033789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:55:43.993191 containerd[1598]: time="2025-11-04T23:55:43.993136712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:55:43.993673 kubelet[2830]: E1104 23:55:43.993460 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:55:43.993673 kubelet[2830]: E1104 23:55:43.993516 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:55:43.994240 kubelet[2830]: E1104 23:55:43.993819 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9bcffb49f-rncd9_calico-system(b6a0e661-70b0-458b-8d61-43e16ce05a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found" logger="UnhandledError" Nov 4 23:55:43.998649 containerd[1598]: time="2025-11-04T23:55:43.998210505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:55:44.131116 sshd[5094]: Accepted publickey for core from 139.178.89.65 port 33078 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:44.134212 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:44.142497 systemd-logind[1570]: New session 18 of user core. Nov 4 23:55:44.145957 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 23:55:44.323732 containerd[1598]: time="2025-11-04T23:55:44.322553674Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:44.323732 containerd[1598]: time="2025-11-04T23:55:44.323581519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:55:44.324163 containerd[1598]: time="2025-11-04T23:55:44.323719332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:55:44.324236 kubelet[2830]: E1104 23:55:44.324052 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:55:44.324236 kubelet[2830]: E1104 23:55:44.324123 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:55:44.324648 kubelet[2830]: E1104 23:55:44.324407 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9bcffb49f-rncd9_calico-system(b6a0e661-70b0-458b-8d61-43e16ce05a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:44.325585 kubelet[2830]: E1104 23:55:44.324729 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9bcffb49f-rncd9" podUID="b6a0e661-70b0-458b-8d61-43e16ce05a61" Nov 4 23:55:44.361338 sshd[5097]: Connection closed by 139.178.89.65 port 33078 Nov 4 23:55:44.362275 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:44.376495 systemd[1]: sshd@17-137.184.235.85:22-139.178.89.65:33078.service: Deactivated successfully. 
Nov 4 23:55:44.380619 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 23:55:44.383925 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit. Nov 4 23:55:44.390634 systemd[1]: Started sshd@18-137.184.235.85:22-139.178.89.65:33094.service - OpenSSH per-connection server daemon (139.178.89.65:33094). Nov 4 23:55:44.391951 systemd-logind[1570]: Removed session 18. Nov 4 23:55:44.475229 sshd[5109]: Accepted publickey for core from 139.178.89.65 port 33094 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:44.476965 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:44.483749 systemd-logind[1570]: New session 19 of user core. Nov 4 23:55:44.488966 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 23:55:44.636947 containerd[1598]: time="2025-11-04T23:55:44.636809492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:55:44.931706 sshd[5112]: Connection closed by 139.178.89.65 port 33094 Nov 4 23:55:44.933024 sshd-session[5109]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:44.944897 systemd[1]: sshd@18-137.184.235.85:22-139.178.89.65:33094.service: Deactivated successfully. Nov 4 23:55:44.947970 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 23:55:44.949109 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit. Nov 4 23:55:44.955484 systemd[1]: Started sshd@19-137.184.235.85:22-139.178.89.65:33106.service - OpenSSH per-connection server daemon (139.178.89.65:33106). Nov 4 23:55:44.957220 systemd-logind[1570]: Removed session 19. 
Nov 4 23:55:45.002800 containerd[1598]: time="2025-11-04T23:55:45.002754005Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:45.004001 containerd[1598]: time="2025-11-04T23:55:45.003904379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:55:45.004183 containerd[1598]: time="2025-11-04T23:55:45.004149846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:55:45.004867 kubelet[2830]: E1104 23:55:45.004654 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:55:45.005410 kubelet[2830]: E1104 23:55:45.004888 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:55:45.006210 kubelet[2830]: E1104 23:55:45.005800 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7bc8f8875-8jrl6_calico-system(92120656-4e9c-41d6-aa85-513f1a7aea60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:45.006210 kubelet[2830]: E1104 23:55:45.005916 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" podUID="92120656-4e9c-41d6-aa85-513f1a7aea60" Nov 4 23:55:45.055867 sshd[5122]: Accepted publickey for core from 139.178.89.65 port 33106 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:45.059438 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:45.067349 systemd-logind[1570]: New session 20 of user core. Nov 4 23:55:45.074006 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 4 23:55:45.639955 containerd[1598]: time="2025-11-04T23:55:45.638637314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:55:45.982943 sshd[5125]: Connection closed by 139.178.89.65 port 33106 Nov 4 23:55:45.983376 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:45.990629 containerd[1598]: time="2025-11-04T23:55:45.990570004Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:45.991895 containerd[1598]: time="2025-11-04T23:55:45.991611569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:55:45.992007 containerd[1598]: time="2025-11-04T23:55:45.991877755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:55:45.992301 kubelet[2830]: E1104 23:55:45.992252 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:55:45.992431 kubelet[2830]: E1104 23:55:45.992315 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:55:45.992569 kubelet[2830]: E1104 23:55:45.992423 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane 
start failed in pod goldmane-7c778bb748-hx5ms_calico-system(24a3a22d-e704-4f02-8408-ca1de6f232f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:45.992569 kubelet[2830]: E1104 23:55:45.992468 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hx5ms" podUID="24a3a22d-e704-4f02-8408-ca1de6f232f0" Nov 4 23:55:45.999492 systemd[1]: sshd@19-137.184.235.85:22-139.178.89.65:33106.service: Deactivated successfully. Nov 4 23:55:46.005891 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 23:55:46.009928 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit. Nov 4 23:55:46.017194 systemd[1]: Started sshd@20-137.184.235.85:22-139.178.89.65:55754.service - OpenSSH per-connection server daemon (139.178.89.65:55754). Nov 4 23:55:46.023546 systemd-logind[1570]: Removed session 20. Nov 4 23:55:46.125006 sshd[5141]: Accepted publickey for core from 139.178.89.65 port 55754 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:46.128039 sshd-session[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:46.140312 systemd-logind[1570]: New session 21 of user core. Nov 4 23:55:46.148033 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 4 23:55:46.729487 sshd[5145]: Connection closed by 139.178.89.65 port 55754 Nov 4 23:55:46.732552 sshd-session[5141]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:46.750132 systemd[1]: Started sshd@21-137.184.235.85:22-139.178.89.65:55766.service - OpenSSH per-connection server daemon (139.178.89.65:55766). Nov 4 23:55:46.751248 systemd[1]: sshd@20-137.184.235.85:22-139.178.89.65:55754.service: Deactivated successfully. Nov 4 23:55:46.754415 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 23:55:46.757525 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit. Nov 4 23:55:46.764307 systemd-logind[1570]: Removed session 21. Nov 4 23:55:46.864356 sshd[5151]: Accepted publickey for core from 139.178.89.65 port 55766 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:46.867826 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:46.879348 systemd-logind[1570]: New session 22 of user core. Nov 4 23:55:46.883052 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 23:55:47.055801 sshd[5157]: Connection closed by 139.178.89.65 port 55766 Nov 4 23:55:47.056692 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:47.064361 systemd[1]: sshd@21-137.184.235.85:22-139.178.89.65:55766.service: Deactivated successfully. Nov 4 23:55:47.064418 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit. Nov 4 23:55:47.068334 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 23:55:47.072493 systemd-logind[1570]: Removed session 22. 
Nov 4 23:55:48.635106 containerd[1598]: time="2025-11-04T23:55:48.634999039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 23:55:48.972258 containerd[1598]: time="2025-11-04T23:55:48.972197057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:55:48.972907 containerd[1598]: time="2025-11-04T23:55:48.972870424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 4 23:55:48.973072 containerd[1598]: time="2025-11-04T23:55:48.972958944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:55:48.973214 kubelet[2830]: E1104 23:55:48.973156 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:55:48.973607 kubelet[2830]: E1104 23:55:48.973232 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:55:48.973607 kubelet[2830]: E1104 23:55:48.973338 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-577ff57f97-lhcd2_calico-apiserver(6addfd64-c562-4f9f-bb9f-581ad89a73d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:55:48.973607 kubelet[2830]: E1104 23:55:48.973381 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" podUID="6addfd64-c562-4f9f-bb9f-581ad89a73d8"
Nov 4 23:55:49.633999 containerd[1598]: time="2025-11-04T23:55:49.633800581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 23:55:49.977514 containerd[1598]: time="2025-11-04T23:55:49.977342108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:55:49.978168 containerd[1598]: time="2025-11-04T23:55:49.978076518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 4 23:55:49.978307 containerd[1598]: time="2025-11-04T23:55:49.978209563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:55:49.979068 kubelet[2830]: E1104 23:55:49.978766 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:55:49.979068 kubelet[2830]: E1104 23:55:49.978845 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:55:49.979068 kubelet[2830]: E1104 23:55:49.978967 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-577ff57f97-8frfn_calico-apiserver(72fd9e54-497f-4204-80c1-9f81d06cb75e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:55:49.979068 kubelet[2830]: E1104 23:55:49.979009 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" podUID="72fd9e54-497f-4204-80c1-9f81d06cb75e"
Nov 4 23:55:51.633250 kubelet[2830]: E1104 23:55:51.633178 2830 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 4 23:55:52.073434 systemd[1]: Started sshd@22-137.184.235.85:22-139.178.89.65:55772.service - OpenSSH per-connection server daemon (139.178.89.65:55772).
Nov 4 23:55:52.186830 sshd[5181]: Accepted publickey for core from 139.178.89.65 port 55772 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk
Nov 4 23:55:52.189254 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:55:52.196760 systemd-logind[1570]: New session 23 of user core.
Nov 4 23:55:52.204044 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 4 23:55:52.360369 sshd[5184]: Connection closed by 139.178.89.65 port 55772
Nov 4 23:55:52.361348 sshd-session[5181]: pam_unix(sshd:session): session closed for user core
Nov 4 23:55:52.368952 systemd[1]: sshd@22-137.184.235.85:22-139.178.89.65:55772.service: Deactivated successfully.
Nov 4 23:55:52.372594 systemd[1]: session-23.scope: Deactivated successfully.
Nov 4 23:55:52.374034 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit.
Nov 4 23:55:52.376108 systemd-logind[1570]: Removed session 23.
Nov 4 23:55:52.639026 containerd[1598]: time="2025-11-04T23:55:52.638298364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 4 23:55:52.949002 containerd[1598]: time="2025-11-04T23:55:52.948819823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:55:52.949780 containerd[1598]: time="2025-11-04T23:55:52.949632646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 4 23:55:52.949780 containerd[1598]: time="2025-11-04T23:55:52.949687878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 4 23:55:52.949971 kubelet[2830]: E1104 23:55:52.949904 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 4 23:55:52.950300 kubelet[2830]: E1104 23:55:52.949982 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 4 23:55:52.950300 kubelet[2830]: E1104 23:55:52.950055 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2h4kv_calico-system(907a3d1f-a9d8-4fa7-9529-2703403b5056): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:55:52.952031 containerd[1598]: time="2025-11-04T23:55:52.951774401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 4 23:55:53.267077 containerd[1598]: time="2025-11-04T23:55:53.266944660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:55:53.268461 containerd[1598]: time="2025-11-04T23:55:53.268168274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 4 23:55:53.268461 containerd[1598]: time="2025-11-04T23:55:53.268170316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 4 23:55:53.269381 kubelet[2830]: E1104 23:55:53.268776 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 4 23:55:53.269381 kubelet[2830]: E1104 23:55:53.268826 2830 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 4 23:55:53.269381 kubelet[2830]: E1104 23:55:53.268909 2830 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2h4kv_calico-system(907a3d1f-a9d8-4fa7-9529-2703403b5056): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:55:53.269644 kubelet[2830]: E1104 23:55:53.268962 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056"
Nov 4 23:55:57.378205 systemd[1]: Started sshd@23-137.184.235.85:22-139.178.89.65:42260.service - OpenSSH per-connection server daemon (139.178.89.65:42260).
Nov 4 23:55:57.476097 sshd[5198]: Accepted publickey for core from 139.178.89.65 port 42260 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk
Nov 4 23:55:57.478968 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:55:57.485502 systemd-logind[1570]: New session 24 of user core.
Nov 4 23:55:57.494991 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 4 23:55:57.634551 kubelet[2830]: E1104 23:55:57.634272 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hx5ms" podUID="24a3a22d-e704-4f02-8408-ca1de6f232f0"
Nov 4 23:55:57.655296 sshd[5201]: Connection closed by 139.178.89.65 port 42260
Nov 4 23:55:57.655979 sshd-session[5198]: pam_unix(sshd:session): session closed for user core
Nov 4 23:55:57.665523 systemd[1]: sshd@23-137.184.235.85:22-139.178.89.65:42260.service: Deactivated successfully.
Nov 4 23:55:57.671436 systemd[1]: session-24.scope: Deactivated successfully.
Nov 4 23:55:57.674574 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit.
Nov 4 23:55:57.678696 systemd-logind[1570]: Removed session 24.
Nov 4 23:55:58.637184 kubelet[2830]: E1104 23:55:58.636797 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9bcffb49f-rncd9" podUID="b6a0e661-70b0-458b-8d61-43e16ce05a61"
Nov 4 23:55:59.633385 kubelet[2830]: E1104 23:55:59.633181 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bc8f8875-8jrl6" podUID="92120656-4e9c-41d6-aa85-513f1a7aea60"
Nov 4 23:56:00.088720 containerd[1598]: time="2025-11-04T23:56:00.088398245Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1469d86dbea8af5395702340570ad22a88bd77f9275b86366231ca08705f452c\" id:\"1e987d9b2df346f0998b71cfeec7dfe7018dc807f084a44511013efa1f230604\" pid:5224 exited_at:{seconds:1762300560 nanos:86903369}"
Nov 4 23:56:01.644193 kubelet[2830]: E1104 23:56:01.644132 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-lhcd2" podUID="6addfd64-c562-4f9f-bb9f-581ad89a73d8"
Nov 4 23:56:02.670781 systemd[1]: Started sshd@24-137.184.235.85:22-139.178.89.65:42272.service - OpenSSH per-connection server daemon (139.178.89.65:42272).
Nov 4 23:56:02.772392 sshd[5237]: Accepted publickey for core from 139.178.89.65 port 42272 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk
Nov 4 23:56:02.776623 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:56:02.784399 systemd-logind[1570]: New session 25 of user core.
Nov 4 23:56:02.788922 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 4 23:56:02.965604 sshd[5240]: Connection closed by 139.178.89.65 port 42272
Nov 4 23:56:02.966944 sshd-session[5237]: pam_unix(sshd:session): session closed for user core
Nov 4 23:56:02.974725 systemd[1]: sshd@24-137.184.235.85:22-139.178.89.65:42272.service: Deactivated successfully.
Nov 4 23:56:02.978205 systemd[1]: session-25.scope: Deactivated successfully.
Nov 4 23:56:02.980129 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit.
Nov 4 23:56:02.982506 systemd-logind[1570]: Removed session 25.
Nov 4 23:56:03.635065 kubelet[2830]: E1104 23:56:03.635027 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-577ff57f97-8frfn" podUID="72fd9e54-497f-4204-80c1-9f81d06cb75e"
Nov 4 23:56:05.635259 kubelet[2830]: E1104 23:56:05.635196 2830 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2h4kv" podUID="907a3d1f-a9d8-4fa7-9529-2703403b5056"