Nov 6 00:18:38.905767 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:12:28 -00 2025 Nov 6 00:18:38.905801 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:18:38.905814 kernel: BIOS-provided physical RAM map: Nov 6 00:18:38.905821 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 6 00:18:38.905828 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 6 00:18:38.905834 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 6 00:18:38.905842 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Nov 6 00:18:38.905854 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Nov 6 00:18:38.905862 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 00:18:38.905868 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 6 00:18:38.905879 kernel: NX (Execute Disable) protection: active Nov 6 00:18:38.905886 kernel: APIC: Static calls initialized Nov 6 00:18:38.905893 kernel: SMBIOS 2.8 present. Nov 6 00:18:38.905900 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Nov 6 00:18:38.905909 kernel: DMI: Memory slots populated: 1/1 Nov 6 00:18:38.905916 kernel: Hypervisor detected: KVM Nov 6 00:18:38.905931 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Nov 6 00:18:38.905939 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 00:18:38.905947 kernel: kvm-clock: using sched offset of 4947701374 cycles Nov 6 00:18:38.905955 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 00:18:38.905963 kernel: tsc: Detected 2494.138 MHz processor Nov 6 00:18:38.905971 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 00:18:38.905979 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 00:18:38.905987 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Nov 6 00:18:38.905994 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 6 00:18:38.906005 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 00:18:38.906013 kernel: ACPI: Early table checksum verification disabled Nov 6 00:18:38.906020 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Nov 6 00:18:38.906028 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:18:38.906037 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:18:38.906044 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:18:38.906052 kernel: ACPI: FACS 0x000000007FFE0000 000040 Nov 6 00:18:38.906060 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:18:38.906067 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:18:38.906078 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:18:38.906086 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:18:38.906094 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Nov 6 00:18:38.906101 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Nov 6 00:18:38.906109 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Nov 6 00:18:38.906117 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Nov 6 00:18:38.906129 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Nov 6 00:18:38.906139 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Nov 6 00:18:38.906147 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Nov 6 00:18:38.906156 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 6 00:18:38.906164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 6 00:18:38.906183 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Nov 6 00:18:38.906192 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Nov 6 00:18:38.906200 kernel: Zone ranges: Nov 6 00:18:38.906211 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 00:18:38.906220 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Nov 6 00:18:38.906228 kernel: Normal empty Nov 6 00:18:38.906236 kernel: Device empty Nov 6 00:18:38.906245 kernel: Movable zone start for each node Nov 6 00:18:38.906253 kernel: Early memory node ranges Nov 6 00:18:38.906261 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 6 00:18:38.906269 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Nov 6 00:18:38.906277 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Nov 6 00:18:38.906285 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 00:18:38.906298 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 6 00:18:38.906306 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Nov 6 00:18:38.906314 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 6 00:18:38.906325 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 00:18:38.906333 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 00:18:38.906344 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 6 00:18:38.906352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 00:18:38.906361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 00:18:38.906371 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 00:18:38.906384 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 00:18:38.906392 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 00:18:38.906400 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 6 00:18:38.906408 kernel: TSC deadline timer available Nov 6 00:18:38.906416 kernel: CPU topo: Max. logical packages: 1 Nov 6 00:18:38.906424 kernel: CPU topo: Max. logical dies: 1 Nov 6 00:18:38.906432 kernel: CPU topo: Max. dies per package: 1 Nov 6 00:18:38.906441 kernel: CPU topo: Max. threads per core: 1 Nov 6 00:18:38.906449 kernel: CPU topo: Num. cores per package: 2 Nov 6 00:18:38.906460 kernel: CPU topo: Num. 
threads per package: 2 Nov 6 00:18:38.906469 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 6 00:18:38.906477 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 6 00:18:38.906485 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Nov 6 00:18:38.906493 kernel: Booting paravirtualized kernel on KVM Nov 6 00:18:38.906501 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 00:18:38.906510 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 6 00:18:38.906518 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 6 00:18:38.906526 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 6 00:18:38.906537 kernel: pcpu-alloc: [0] 0 1 Nov 6 00:18:38.906545 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 6 00:18:38.906555 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:18:38.906563 kernel: random: crng init done Nov 6 00:18:38.906572 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 00:18:38.906580 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 6 00:18:38.906588 kernel: Fallback order for Node 0: 0 Nov 6 00:18:38.906597 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Nov 6 00:18:38.906605 kernel: Policy zone: DMA32 Nov 6 00:18:38.906617 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 00:18:38.906625 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 6 00:18:38.906634 kernel: Kernel/User page tables isolation: enabled Nov 6 00:18:38.906642 kernel: ftrace: allocating 40021 entries in 157 pages Nov 6 00:18:38.906650 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 00:18:38.906659 kernel: Dynamic Preempt: voluntary Nov 6 00:18:38.906667 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 00:18:38.906677 kernel: rcu: RCU event tracing is enabled. Nov 6 00:18:38.906685 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 6 00:18:38.906697 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 00:18:38.906705 kernel: Rude variant of Tasks RCU enabled. Nov 6 00:18:38.906713 kernel: Tracing variant of Tasks RCU enabled. Nov 6 00:18:38.906722 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 00:18:38.906730 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 6 00:18:38.906738 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:18:38.906749 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:18:38.906758 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:18:38.906766 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 6 00:18:38.906778 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 6 00:18:38.906786 kernel: Console: colour VGA+ 80x25 Nov 6 00:18:38.906794 kernel: printk: legacy console [tty0] enabled Nov 6 00:18:38.906802 kernel: printk: legacy console [ttyS0] enabled Nov 6 00:18:38.906811 kernel: ACPI: Core revision 20240827 Nov 6 00:18:38.906819 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 6 00:18:38.906839 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 00:18:38.906851 kernel: x2apic enabled Nov 6 00:18:38.906860 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 00:18:38.906868 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 6 00:18:38.906877 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Nov 6 00:18:38.906892 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Nov 6 00:18:38.906901 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 6 00:18:38.906910 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 6 00:18:38.906919 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 00:18:38.906928 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 00:18:38.906939 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 00:18:38.906948 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 6 00:18:38.906957 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 00:18:38.906965 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 00:18:38.906974 kernel: MDS: Mitigation: Clear CPU buffers Nov 6 00:18:38.906983 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 6 00:18:38.906992 kernel: active return thunk: its_return_thunk Nov 6 00:18:38.907001 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 6 00:18:38.907010 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 00:18:38.907022 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 00:18:38.907030 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 00:18:38.907039 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 00:18:38.907048 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 6 00:18:38.907057 kernel: Freeing SMP alternatives memory: 32K Nov 6 00:18:38.907065 kernel: pid_max: default: 32768 minimum: 301 Nov 6 00:18:38.907074 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 00:18:38.907083 kernel: landlock: Up and running. Nov 6 00:18:38.907092 kernel: SELinux: Initializing. Nov 6 00:18:38.907104 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 00:18:38.907112 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 00:18:38.907121 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Nov 6 00:18:38.907130 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Nov 6 00:18:38.907139 kernel: signal: max sigframe size: 1776 Nov 6 00:18:38.907148 kernel: rcu: Hierarchical SRCU implementation. Nov 6 00:18:38.907157 kernel: rcu: Max phase no-delay instances is 400. 
Nov 6 00:18:38.907166 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 6 00:18:38.909246 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 6 00:18:38.909281 kernel: smp: Bringing up secondary CPUs ... Nov 6 00:18:38.909297 kernel: smpboot: x86: Booting SMP configuration: Nov 6 00:18:38.909307 kernel: .... node #0, CPUs: #1 Nov 6 00:18:38.909317 kernel: smp: Brought up 1 node, 2 CPUs Nov 6 00:18:38.909326 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Nov 6 00:18:38.909337 kernel: Memory: 1960760K/2096612K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45548K init, 1180K bss, 131288K reserved, 0K cma-reserved) Nov 6 00:18:38.909346 kernel: devtmpfs: initialized Nov 6 00:18:38.909356 kernel: x86/mm: Memory block size: 128MB Nov 6 00:18:38.909365 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 00:18:38.909379 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 6 00:18:38.909388 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 00:18:38.909397 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 00:18:38.909406 kernel: audit: initializing netlink subsys (disabled) Nov 6 00:18:38.909416 kernel: audit: type=2000 audit(1762388315.790:1): state=initialized audit_enabled=0 res=1 Nov 6 00:18:38.909425 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 00:18:38.909434 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 00:18:38.909444 kernel: cpuidle: using governor menu Nov 6 00:18:38.909452 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 00:18:38.909464 kernel: dca service started, version 1.12.1 Nov 6 00:18:38.909475 kernel: PCI: Using configuration type 1 for base access Nov 6 00:18:38.909488 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 6 00:18:38.909499 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 00:18:38.909508 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 00:18:38.909594 kernel: ACPI: Added _OSI(Module Device) Nov 6 00:18:38.909605 kernel: ACPI: Added _OSI(Processor Device) Nov 6 00:18:38.909614 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 00:18:38.909624 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 00:18:38.909640 kernel: ACPI: Interpreter enabled Nov 6 00:18:38.909649 kernel: ACPI: PM: (supports S0 S5) Nov 6 00:18:38.909658 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 00:18:38.909668 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 00:18:38.909677 kernel: PCI: Using E820 reservations for host bridge windows Nov 6 00:18:38.909686 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 6 00:18:38.909695 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 00:18:38.910010 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 6 00:18:38.910127 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 6 00:18:38.910257 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 6 00:18:38.910270 kernel: acpiphp: Slot [3] registered Nov 6 00:18:38.910282 kernel: acpiphp: Slot [4] registered Nov 6 00:18:38.910297 kernel: acpiphp: Slot [5] registered Nov 6 00:18:38.910309 kernel: acpiphp: Slot [6] registered Nov 6 00:18:38.910317 kernel: acpiphp: Slot [7] registered Nov 6 00:18:38.910327 kernel: acpiphp: Slot [8] registered Nov 6 00:18:38.910342 kernel: acpiphp: Slot [9] registered Nov 6 00:18:38.910351 kernel: acpiphp: Slot [10] registered Nov 6 00:18:38.910360 kernel: acpiphp: Slot [11] registered Nov 6 00:18:38.910369 kernel: acpiphp: Slot [12] registered Nov 6 00:18:38.910378 kernel: acpiphp: Slot [13] registered Nov 6 00:18:38.910387 kernel: acpiphp: Slot [14] registered Nov 6 00:18:38.910396 kernel: acpiphp: Slot [15] registered Nov 6 00:18:38.910405 kernel: acpiphp: Slot [16] registered Nov 6 00:18:38.910414 kernel: acpiphp: Slot [17] registered Nov 6 00:18:38.910422 kernel: acpiphp: Slot [18] registered Nov 6 00:18:38.910436 kernel: acpiphp: Slot [19] registered Nov 6 00:18:38.910445 kernel: acpiphp: Slot [20] registered Nov 6 00:18:38.910454 kernel: acpiphp: Slot [21] registered Nov 6 00:18:38.910463 kernel: acpiphp: Slot [22] registered Nov 6 00:18:38.910472 kernel: acpiphp: Slot [23] registered Nov 6 00:18:38.910481 kernel: acpiphp: Slot [24] registered Nov 6 00:18:38.910489 kernel: acpiphp: Slot [25] registered Nov 6 00:18:38.910498 kernel: acpiphp: Slot [26] registered Nov 6 00:18:38.910507 kernel: acpiphp: Slot [27] registered Nov 6 00:18:38.910520 kernel: acpiphp: Slot [28] registered Nov 6 00:18:38.910529 kernel: acpiphp: Slot [29] registered Nov 6 00:18:38.910537 kernel: acpiphp: Slot [30] registered Nov 6 00:18:38.910546 kernel: acpiphp: Slot [31] registered Nov 6 00:18:38.910555 kernel: PCI host bridge to bus 0000:00 Nov 6 00:18:38.910708 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 00:18:38.910813 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 00:18:38.910929 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 00:18:38.911022 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] 
Nov 6 00:18:38.911117 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Nov 6 00:18:38.912695 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 00:18:38.912870 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Nov 6 00:18:38.913041 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Nov 6 00:18:38.913190 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Nov 6 00:18:38.913351 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Nov 6 00:18:38.913450 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Nov 6 00:18:38.913567 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Nov 6 00:18:38.913672 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Nov 6 00:18:38.913781 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Nov 6 00:18:38.913912 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Nov 6 00:18:38.914007 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Nov 6 00:18:38.914120 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Nov 6 00:18:38.917331 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Nov 6 00:18:38.917467 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Nov 6 00:18:38.917633 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Nov 6 00:18:38.917755 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Nov 6 00:18:38.917901 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Nov 6 00:18:38.918048 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Nov 6 00:18:38.918152 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Nov 6 00:18:38.918274 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 6 00:18:38.918384 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 6 00:18:38.918498 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Nov 6 00:18:38.918603 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Nov 6 00:18:38.918737 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Nov 6 00:18:38.918907 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 6 00:18:38.919007 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Nov 6 00:18:38.919103 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Nov 6 00:18:38.923342 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Nov 6 00:18:38.923552 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Nov 6 00:18:38.923657 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Nov 6 00:18:38.923754 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Nov 6 00:18:38.923864 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Nov 6 00:18:38.923976 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 6 00:18:38.924073 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Nov 6 00:18:38.924167 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Nov 6 00:18:38.924325 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Nov 6 00:18:38.924445 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 6 00:18:38.924541 kernel: pci 
0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Nov 6 00:18:38.924642 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Nov 6 00:18:38.924737 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Nov 6 00:18:38.924838 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Nov 6 00:18:38.924932 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Nov 6 00:18:38.925025 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Nov 6 00:18:38.925037 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 00:18:38.925051 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 00:18:38.925060 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 00:18:38.925069 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 00:18:38.925079 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 6 00:18:38.925088 kernel: iommu: Default domain type: Translated Nov 6 00:18:38.925097 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 00:18:38.925106 kernel: PCI: Using ACPI for IRQ routing Nov 6 00:18:38.925115 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 00:18:38.925124 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 6 00:18:38.925137 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Nov 6 00:18:38.925249 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Nov 6 00:18:38.925343 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Nov 6 00:18:38.925436 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 00:18:38.925448 kernel: vgaarb: loaded Nov 6 00:18:38.925457 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 6 00:18:38.925467 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 6 00:18:38.925475 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 00:18:38.925484 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 00:18:38.925499 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 00:18:38.925522 kernel: pnp: PnP ACPI init Nov 6 00:18:38.925536 kernel: pnp: PnP ACPI: found 4 devices Nov 6 00:18:38.925550 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 00:18:38.925564 kernel: NET: Registered PF_INET protocol family Nov 6 00:18:38.925574 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 00:18:38.925583 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 6 00:18:38.925592 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 00:18:38.925601 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 00:18:38.925615 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 6 00:18:38.925625 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 6 00:18:38.925634 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 00:18:38.925643 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 00:18:38.925652 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 00:18:38.925661 kernel: NET: Registered PF_XDP protocol family Nov 6 00:18:38.925767 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 00:18:38.925854 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 00:18:38.925950 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] 
Nov 6 00:18:38.926033 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 6 00:18:38.926127 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Nov 6 00:18:38.927018 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Nov 6 00:18:38.927156 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 6 00:18:38.927188 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 6 00:18:38.927290 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 28195 usecs Nov 6 00:18:38.927306 kernel: PCI: CLS 0 bytes, default 64 Nov 6 00:18:38.927331 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 6 00:18:38.927345 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Nov 6 00:18:38.927357 kernel: Initialise system trusted keyrings Nov 6 00:18:38.927371 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 6 00:18:38.927381 kernel: Key type asymmetric registered Nov 6 00:18:38.927390 kernel: Asymmetric key parser 'x509' registered Nov 6 00:18:38.927400 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 00:18:38.927409 kernel: io scheduler mq-deadline registered Nov 6 00:18:38.927419 kernel: io scheduler kyber registered Nov 6 00:18:38.927432 kernel: io scheduler bfq registered Nov 6 00:18:38.927441 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 00:18:38.927450 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Nov 6 00:18:38.927459 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 6 00:18:38.927468 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 6 00:18:38.927477 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 00:18:38.927486 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 00:18:38.927495 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 00:18:38.927504 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 00:18:38.927517 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 00:18:38.927646 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 6 00:18:38.927660 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Nov 6 00:18:38.927747 kernel: rtc_cmos 00:03: registered as rtc0 Nov 6 00:18:38.927852 kernel: rtc_cmos 00:03: setting system clock to 2025-11-06T00:18:38 UTC (1762388318) Nov 6 00:18:38.927973 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 6 00:18:38.927991 kernel: intel_pstate: CPU model not supported Nov 6 00:18:38.928004 kernel: NET: Registered PF_INET6 protocol family Nov 6 00:18:38.928020 kernel: Segment Routing with IPv6 Nov 6 00:18:38.928029 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 00:18:38.928039 kernel: NET: Registered PF_PACKET protocol family Nov 6 00:18:38.928048 kernel: Key type dns_resolver registered Nov 6 00:18:38.928057 kernel: IPI shorthand broadcast: enabled Nov 6 00:18:38.928066 kernel: sched_clock: Marking stable (3562004611, 158429285)->(3748843065, -28409169) Nov 6 00:18:38.928075 kernel: registered taskstats version 1 Nov 6 00:18:38.928085 kernel: Loading compiled-in X.509 certificates Nov 6 00:18:38.928094 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f906521ec29cbf079ae365554bad8eb8ed6ecb31' Nov 6 00:18:38.928106 kernel: Demotion targets for Node 0: null Nov 6 00:18:38.928115 kernel: Key type .fscrypt registered Nov 6 00:18:38.928124 kernel: Key type fscrypt-provisioning 
registered Nov 6 00:18:38.928157 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 00:18:38.928170 kernel: ima: Allocated hash algorithm: sha1 Nov 6 00:18:38.928360 kernel: ima: No architecture policies found Nov 6 00:18:38.928370 kernel: clk: Disabling unused clocks Nov 6 00:18:38.928380 kernel: Warning: unable to open an initial console. Nov 6 00:18:38.928390 kernel: Freeing unused kernel image (initmem) memory: 45548K Nov 6 00:18:38.928405 kernel: Write protecting the kernel read-only data: 40960k Nov 6 00:18:38.928415 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K Nov 6 00:18:38.928424 kernel: Run /init as init process Nov 6 00:18:38.928435 kernel: with arguments: Nov 6 00:18:38.928445 kernel: /init Nov 6 00:18:38.928455 kernel: with environment: Nov 6 00:18:38.928464 kernel: HOME=/ Nov 6 00:18:38.928473 kernel: TERM=linux Nov 6 00:18:38.928485 systemd[1]: Successfully made /usr/ read-only. Nov 6 00:18:38.928503 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:18:38.928514 systemd[1]: Detected virtualization kvm. Nov 6 00:18:38.928523 systemd[1]: Detected architecture x86-64. Nov 6 00:18:38.928534 systemd[1]: Running in initrd. Nov 6 00:18:38.928544 systemd[1]: No hostname configured, using default hostname. Nov 6 00:18:38.928554 systemd[1]: Hostname set to . Nov 6 00:18:38.928564 systemd[1]: Initializing machine ID from VM UUID. Nov 6 00:18:38.928577 systemd[1]: Queued start job for default target initrd.target. Nov 6 00:18:38.928587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:18:38.928598 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:18:38.928609 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 00:18:38.928620 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:18:38.928631 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 00:18:38.928645 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 00:18:38.928656 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 6 00:18:38.928667 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 6 00:18:38.928677 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:18:38.928687 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:18:38.928697 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:18:38.928711 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:18:38.928721 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:18:38.928731 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:18:38.928741 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:18:38.928751 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Nov 6 00:18:38.928761 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 00:18:38.928771 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 00:18:38.928781 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:18:38.928791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:18:38.928805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:18:38.928815 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:18:38.928825 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 00:18:38.928835 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:18:38.928906 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 00:18:38.930234 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 6 00:18:38.930251 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 00:18:38.930262 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:18:38.930282 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:18:38.930292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:18:38.930306 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 00:18:38.930317 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:18:38.930331 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 00:18:38.930401 systemd-journald[192]: Collecting audit messages is disabled. Nov 6 00:18:38.930427 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:18:38.930440 systemd-journald[192]: Journal started Nov 6 00:18:38.930467 systemd-journald[192]: Runtime Journal (/run/log/journal/e184d2cc7a544bce9f9a8952e2f527b2) is 4.9M, max 39.2M, 34.3M free. Nov 6 00:18:38.916568 systemd-modules-load[195]: Inserted module 'overlay' Nov 6 00:18:38.940220 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:18:38.945305 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:18:39.000397 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 00:18:39.000434 kernel: Bridge firewalling registered Nov 6 00:18:38.959763 systemd-modules-load[195]: Inserted module 'br_netfilter' Nov 6 00:18:39.001956 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:18:39.002665 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:18:39.005587 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 00:18:39.007402 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:18:39.011365 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:18:39.015912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:18:39.035435 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 6 00:18:39.036809 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:18:39.041590 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 6 00:18:39.046635 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:18:39.047580 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:18:39.050081 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 00:18:39.053347 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:18:39.087807 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:18:39.108977 systemd-resolved[232]: Positive Trust Anchors: Nov 6 00:18:39.109003 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:18:39.109053 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:18:39.112871 systemd-resolved[232]: Defaulting to hostname 'linux'. Nov 6 00:18:39.115135 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:18:39.115905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:18:39.211231 kernel: SCSI subsystem initialized Nov 6 00:18:39.221221 kernel: Loading iSCSI transport class v2.0-870. Nov 6 00:18:39.233205 kernel: iscsi: registered transport (tcp) Nov 6 00:18:39.256219 kernel: iscsi: registered transport (qla4xxx) Nov 6 00:18:39.256318 kernel: QLogic iSCSI HBA Driver Nov 6 00:18:39.283520 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:18:39.314527 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:18:39.317361 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:18:39.389786 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 00:18:39.393002 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 00:18:39.468268 kernel: raid6: avx2x4 gen() 12364 MB/s Nov 6 00:18:39.486262 kernel: raid6: avx2x2 gen() 12875 MB/s Nov 6 00:18:39.504535 kernel: raid6: avx2x1 gen() 10040 MB/s Nov 6 00:18:39.504668 kernel: raid6: using algorithm avx2x2 gen() 12875 MB/s Nov 6 00:18:39.523574 kernel: raid6: .... 
xor() 12684 MB/s, rmw enabled Nov 6 00:18:39.523716 kernel: raid6: using avx2x2 recovery algorithm Nov 6 00:18:39.551231 kernel: xor: automatically using best checksumming function avx Nov 6 00:18:39.783232 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 00:18:39.792960 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:18:39.796568 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:18:39.835280 systemd-udevd[441]: Using default interface naming scheme 'v255'. Nov 6 00:18:39.845337 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:18:39.849313 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 00:18:39.885984 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Nov 6 00:18:39.924396 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:18:39.926823 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:18:40.022763 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:18:40.026894 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 00:18:40.108215 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Nov 6 00:18:40.118199 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Nov 6 00:18:40.125493 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 6 00:18:40.139216 kernel: scsi host0: Virtio SCSI HBA Nov 6 00:18:40.153666 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 00:18:40.153752 kernel: GPT:9289727 != 125829119 Nov 6 00:18:40.153774 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 00:18:40.153788 kernel: GPT:9289727 != 125829119 Nov 6 00:18:40.153799 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 00:18:40.153811 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 00:18:40.156194 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 6 00:18:40.159474 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 6 00:18:40.177702 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 00:18:40.177781 kernel: ACPI: bus type USB registered Nov 6 00:18:40.181199 kernel: usbcore: registered new interface driver usbfs Nov 6 00:18:40.183202 kernel: usbcore: registered new interface driver hub Nov 6 00:18:40.187199 kernel: usbcore: registered new device driver usb Nov 6 00:18:40.224210 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 6 00:18:40.245293 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:18:40.252228 kernel: AES CTR mode by8 optimization enabled Nov 6 00:18:40.252672 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:18:40.255014 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:18:40.258395 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:18:40.262411 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:18:40.307519 kernel: libata version 3.00 loaded. 
Nov 6 00:18:40.317197 kernel: ata_piix 0000:00:01.1: version 2.13 Nov 6 00:18:40.335206 kernel: scsi host1: ata_piix Nov 6 00:18:40.336192 kernel: scsi host2: ata_piix Nov 6 00:18:40.336674 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Nov 6 00:18:40.336690 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Nov 6 00:18:40.345246 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 6 00:18:40.406537 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Nov 6 00:18:40.406817 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Nov 6 00:18:40.406939 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Nov 6 00:18:40.407496 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Nov 6 00:18:40.407651 kernel: hub 1-0:1.0: USB hub found Nov 6 00:18:40.407807 kernel: hub 1-0:1.0: 2 ports detected Nov 6 00:18:40.409213 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:18:40.437255 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 00:18:40.450246 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 6 00:18:40.460688 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 6 00:18:40.461515 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 6 00:18:40.464396 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 00:18:40.496207 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 00:18:40.497600 disk-uuid[590]: Primary Header is updated. Nov 6 00:18:40.497600 disk-uuid[590]: Secondary Entries is updated. Nov 6 00:18:40.497600 disk-uuid[590]: Secondary Header is updated. Nov 6 00:18:40.652191 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 00:18:40.653906 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:18:40.654500 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:18:40.655517 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:18:40.657764 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 00:18:40.688033 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:18:41.516267 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 00:18:41.518302 disk-uuid[591]: The operation has completed successfully. Nov 6 00:18:41.571418 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 00:18:41.571624 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 00:18:41.597012 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 6 00:18:41.616542 sh[615]: Success Nov 6 00:18:41.637221 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 00:18:41.640027 kernel: device-mapper: uevent: version 1.0.3 Nov 6 00:18:41.640109 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 00:18:41.650226 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Nov 6 00:18:41.709041 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Nov 6 00:18:41.710694 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 00:18:41.721748 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 00:18:41.733216 kernel: BTRFS: device fsid 85d805c5-984c-4a6a-aaeb-49fff3689175 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (627) Nov 6 00:18:41.736212 kernel: BTRFS info (device dm-0): first mount of filesystem 85d805c5-984c-4a6a-aaeb-49fff3689175 Nov 6 00:18:41.736299 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:18:41.746508 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 00:18:41.746606 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 00:18:41.748604 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 00:18:41.750405 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:18:41.751733 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 00:18:41.753730 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 00:18:41.755524 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 00:18:41.788235 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (659) Nov 6 00:18:41.791195 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:18:41.793211 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:18:41.798739 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:18:41.798854 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:18:41.805199 kernel: BTRFS info (device vda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:18:41.807838 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 00:18:41.809730 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 00:18:41.950424 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:18:41.954370 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:18:41.997123 systemd-networkd[802]: lo: Link UP Nov 6 00:18:41.997135 systemd-networkd[802]: lo: Gained carrier Nov 6 00:18:41.998817 ignition[707]: Ignition 2.22.0 Nov 6 00:18:41.999596 systemd-networkd[802]: Enumeration completed Nov 6 00:18:41.998829 ignition[707]: Stage: fetch-offline Nov 6 00:18:41.999779 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:18:41.998871 ignition[707]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:18:42.000229 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 6 00:18:41.998882 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 00:18:42.000233 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 6 00:18:42.005395 ignition[707]: parsed url from cmdline: "" Nov 6 00:18:42.001544 systemd-networkd[802]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 6 00:18:42.005402 ignition[707]: no config URL provided Nov 6 00:18:42.001552 systemd-networkd[802]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:18:42.005411 ignition[707]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:18:42.002097 systemd[1]: Reached target network.target - Network. Nov 6 00:18:42.005425 ignition[707]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:18:42.002914 systemd-networkd[802]: eth0: Link UP Nov 6 00:18:42.005433 ignition[707]: failed to fetch config: resource requires networking Nov 6 00:18:42.003076 systemd-networkd[802]: eth1: Link UP Nov 6 00:18:42.005797 ignition[707]: Ignition finished successfully Nov 6 00:18:42.003257 systemd-networkd[802]: eth0: Gained carrier Nov 6 00:18:42.003269 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 6 00:18:42.006850 systemd-networkd[802]: eth1: Gained carrier Nov 6 00:18:42.006868 systemd-networkd[802]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:18:42.008959 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:18:42.011107 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 6 00:18:42.018329 systemd-networkd[802]: eth0: DHCPv4 address 64.23.183.231/20, gateway 64.23.176.1 acquired from 169.254.169.253 Nov 6 00:18:42.031325 systemd-networkd[802]: eth1: DHCPv4 address 10.124.0.27/20 acquired from 169.254.169.253 Nov 6 00:18:42.051649 ignition[806]: Ignition 2.22.0 Nov 6 00:18:42.051681 ignition[806]: Stage: fetch Nov 6 00:18:42.051851 ignition[806]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:18:42.051862 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 00:18:42.051985 ignition[806]: parsed url from cmdline: "" Nov 6 00:18:42.051989 ignition[806]: no config URL provided Nov 6 00:18:42.051995 ignition[806]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:18:42.052003 ignition[806]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:18:42.052042 ignition[806]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 6 00:18:42.083554 ignition[806]: GET result: OK Nov 6 00:18:42.085601 ignition[806]: parsing config with SHA512: a042a45d981453046bd5a9284d98a9f30831e09a2730a45f2b5363c6e0c3db37e1806562ae5f55d64c906f01364b9b0b9e64671b74e589854198e63b3480bbfb Nov 6 00:18:42.094850 unknown[806]: fetched base config from "system" Nov 6 00:18:42.094864 unknown[806]: fetched base config from "system" Nov 6 00:18:42.094870 unknown[806]: fetched user config from "digitalocean" Nov 6 00:18:42.095722 ignition[806]: fetch: fetch complete Nov 6 00:18:42.095728 ignition[806]: fetch: fetch passed Nov 6 00:18:42.095798 ignition[806]: Ignition finished successfully Nov 6 00:18:42.099404 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 6 00:18:42.101826 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 6 00:18:42.139311 ignition[813]: Ignition 2.22.0 Nov 6 00:18:42.139324 ignition[813]: Stage: kargs Nov 6 00:18:42.139496 ignition[813]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:18:42.139507 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 00:18:42.141851 ignition[813]: kargs: kargs passed Nov 6 00:18:42.141921 ignition[813]: Ignition finished successfully Nov 6 00:18:42.144992 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 00:18:42.146806 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 00:18:42.184581 ignition[820]: Ignition 2.22.0 Nov 6 00:18:42.185723 ignition[820]: Stage: disks Nov 6 00:18:42.186547 ignition[820]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:18:42.186561 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 00:18:42.187703 ignition[820]: disks: disks passed Nov 6 00:18:42.187760 ignition[820]: Ignition finished successfully Nov 6 00:18:42.191901 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 00:18:42.192754 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 00:18:42.193440 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 00:18:42.194503 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:18:42.195508 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:18:42.196377 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:18:42.198504 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 00:18:42.229223 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 6 00:18:42.233277 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 00:18:42.236984 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 00:18:42.391213 kernel: EXT4-fs (vda9): mounted filesystem 25ee01aa-0270-4de7-b5da-d8936d968d16 r/w with ordered data mode. Quota mode: none. Nov 6 00:18:42.392334 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 00:18:42.393383 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 00:18:42.396822 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:18:42.398979 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 00:18:42.403700 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Nov 6 00:18:42.412380 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 6 00:18:42.414406 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 00:18:42.414527 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:18:42.425215 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (837) Nov 6 00:18:42.430205 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:18:42.430279 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:18:42.439195 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 00:18:42.443809 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 6 00:18:42.455879 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:18:42.455938 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:18:42.463933 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:18:42.523767 coreos-metadata[839]: Nov 06 00:18:42.523 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 00:18:42.537279 coreos-metadata[839]: Nov 06 00:18:42.537 INFO Fetch successful Nov 6 00:18:42.546168 coreos-metadata[840]: Nov 06 00:18:42.546 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 00:18:42.549957 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Nov 6 00:18:42.550190 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Nov 6 00:18:42.558128 coreos-metadata[840]: Nov 06 00:18:42.557 INFO Fetch successful Nov 6 00:18:42.559612 initrd-setup-root[868]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 00:18:42.567393 initrd-setup-root[875]: cut: /sysroot/etc/group: No such file or directory Nov 6 00:18:42.571304 coreos-metadata[840]: Nov 06 00:18:42.571 INFO wrote hostname ci-4459.1.0-n-46450dc2d5 to /sysroot/etc/hostname Nov 6 00:18:42.573937 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 00:18:42.580659 initrd-setup-root[883]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 00:18:42.587564 initrd-setup-root[890]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 00:18:42.709092 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 00:18:42.712333 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 00:18:42.715354 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 00:18:42.744714 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 00:18:42.746294 kernel: BTRFS info (device vda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:18:42.764886 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 00:18:42.796777 ignition[959]: INFO : Ignition 2.22.0 Nov 6 00:18:42.798834 ignition[959]: INFO : Stage: mount Nov 6 00:18:42.798834 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:18:42.798834 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 00:18:42.800657 ignition[959]: INFO : mount: mount passed Nov 6 00:18:42.802257 ignition[959]: INFO : Ignition finished successfully Nov 6 00:18:42.803716 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 00:18:42.806870 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 00:18:42.835464 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:18:42.871234 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (969) Nov 6 00:18:42.875355 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:18:42.875449 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:18:42.879467 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:18:42.879567 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:18:42.883086 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 6 00:18:42.934211 ignition[986]: INFO : Ignition 2.22.0 Nov 6 00:18:42.934211 ignition[986]: INFO : Stage: files Nov 6 00:18:42.935747 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:18:42.935747 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 00:18:42.935747 ignition[986]: DEBUG : files: compiled without relabeling support, skipping Nov 6 00:18:42.938264 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 00:18:42.938264 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 00:18:42.941420 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 00:18:42.942271 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 00:18:42.943521 unknown[986]: wrote ssh authorized keys file for user: core Nov 6 00:18:42.944474 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 00:18:42.946224 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:18:42.947037 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 00:18:42.972302 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 00:18:43.031932 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:18:43.031932 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:18:43.034977 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:18:43.050672 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 6 00:18:43.474689 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 6 00:18:43.851385 systemd-networkd[802]: eth0: Gained IPv6LL Nov 6 00:18:43.915424 systemd-networkd[802]: eth1: Gained IPv6LL Nov 6 00:18:44.010100 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:18:44.010100 ignition[986]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 6 00:18:44.013111 ignition[986]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:18:44.015223 ignition[986]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:18:44.015223 ignition[986]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 6 00:18:44.015223 ignition[986]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 6 00:18:44.015223 ignition[986]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 00:18:44.021107 ignition[986]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:18:44.021107 ignition[986]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:18:44.021107 ignition[986]: INFO : files: files passed Nov 6 00:18:44.021107 ignition[986]: INFO : Ignition finished successfully Nov 6 00:18:44.018960 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 00:18:44.022970 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 00:18:44.029436 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 00:18:44.052787 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 00:18:44.052928 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 00:18:44.063844 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:18:44.063844 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:18:44.066587 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:18:44.068242 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:18:44.069443 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 00:18:44.071484 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 00:18:44.149734 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 00:18:44.149903 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Nov 6 00:18:44.151540 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 00:18:44.152622 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 00:18:44.153990 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:18:44.156233 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 00:18:44.183029 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:18:44.187864 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 00:18:44.215625 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:18:44.217938 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:18:44.218963 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 00:18:44.220160 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 00:18:44.220471 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:18:44.222467 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 00:18:44.223519 systemd[1]: Stopped target basic.target - Basic System. Nov 6 00:18:44.224510 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 00:18:44.225555 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:18:44.226878 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 00:18:44.228034 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:18:44.229255 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 00:18:44.230587 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:18:44.231862 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 00:18:44.233004 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 00:18:44.234318 systemd[1]: Stopped target swap.target - Swaps. Nov 6 00:18:44.235214 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 00:18:44.235482 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:18:44.236970 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:18:44.238426 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:18:44.239509 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 00:18:44.239721 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:18:44.240881 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 00:18:44.241210 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 00:18:44.242726 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 00:18:44.243025 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:18:44.244624 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 00:18:44.244896 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 00:18:44.246482 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 6 00:18:44.246732 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 00:18:44.249953 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Nov 6 00:18:44.250735 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 00:18:44.251035 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:18:44.254532 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 00:18:44.255578 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 00:18:44.257522 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:18:44.259541 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 00:18:44.261426 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:18:44.277665 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 00:18:44.278058 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 00:18:44.301207 ignition[1039]: INFO : Ignition 2.22.0 Nov 6 00:18:44.301207 ignition[1039]: INFO : Stage: umount Nov 6 00:18:44.301207 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:18:44.301207 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 00:18:44.306038 ignition[1039]: INFO : umount: umount passed Nov 6 00:18:44.306038 ignition[1039]: INFO : Ignition finished successfully Nov 6 00:18:44.306957 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 00:18:44.308907 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 00:18:44.309699 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 00:18:44.311404 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 00:18:44.312121 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 00:18:44.312864 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 00:18:44.312924 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 00:18:44.314422 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 00:18:44.314517 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 00:18:44.323265 systemd[1]: Stopped target network.target - Network. Nov 6 00:18:44.324086 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 00:18:44.324273 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:18:44.325296 systemd[1]: Stopped target paths.target - Path Units. Nov 6 00:18:44.338452 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 00:18:44.343300 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:18:44.343839 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 00:18:44.344340 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 00:18:44.344803 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 00:18:44.344856 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:18:44.346361 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 00:18:44.346414 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:18:44.347455 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 00:18:44.347534 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 00:18:44.348508 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 00:18:44.348564 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 00:18:44.349816 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Nov 6 00:18:44.350981 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 00:18:44.353040 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 00:18:44.353153 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 00:18:44.354589 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 00:18:44.354705 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 00:18:44.361375 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 00:18:44.361635 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 00:18:44.364678 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 00:18:44.364980 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 00:18:44.365142 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 00:18:44.369613 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 00:18:44.370744 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 00:18:44.372115 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 00:18:44.372165 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:18:44.375303 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 00:18:44.376519 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 00:18:44.376613 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:18:44.380317 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:18:44.380415 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:18:44.382865 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 00:18:44.382935 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 00:18:44.385641 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 00:18:44.385733 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:18:44.386396 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:18:44.397068 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:18:44.397351 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:18:44.404828 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 00:18:44.405091 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:18:44.406899 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 00:18:44.407013 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 00:18:44.409209 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 00:18:44.409265 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:18:44.410469 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 00:18:44.410557 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:18:44.413777 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 00:18:44.414284 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 00:18:44.415919 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Nov 6 00:18:44.416004 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:18:44.422827 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 00:18:44.423407 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 00:18:44.423499 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:18:44.426561 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 00:18:44.426636 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:18:44.428576 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:18:44.428654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:18:44.432027 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 6 00:18:44.432102 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 00:18:44.432164 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:18:44.432714 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 00:18:44.432854 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 00:18:44.449630 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 00:18:44.449761 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 00:18:44.451760 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 00:18:44.454524 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 00:18:44.483472 systemd[1]: Switching root. Nov 6 00:18:44.533745 systemd-journald[192]: Journal stopped Nov 6 00:18:45.890436 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Nov 6 00:18:45.890555 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 00:18:45.890580 kernel: SELinux: policy capability open_perms=1 Nov 6 00:18:45.890600 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 00:18:45.890627 kernel: SELinux: policy capability always_check_network=0 Nov 6 00:18:45.890654 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 00:18:45.890681 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 00:18:45.890706 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 00:18:45.890731 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 00:18:45.890750 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 00:18:45.890771 kernel: audit: type=1403 audit(1762388324.704:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 00:18:45.890792 systemd[1]: Successfully loaded SELinux policy in 72.927ms. Nov 6 00:18:45.890817 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.335ms. Nov 6 00:18:45.890840 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:18:45.890869 systemd[1]: Detected virtualization kvm. Nov 6 00:18:45.890893 systemd[1]: Detected architecture x86-64. Nov 6 00:18:45.890915 systemd[1]: Detected first boot. 
Nov 6 00:18:45.890937 systemd[1]: Hostname set to . Nov 6 00:18:45.890958 systemd[1]: Initializing machine ID from VM UUID. Nov 6 00:18:45.890979 zram_generator::config[1087]: No configuration found. Nov 6 00:18:45.891005 kernel: Guest personality initialized and is inactive Nov 6 00:18:45.891024 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 00:18:45.891043 kernel: Initialized host personality Nov 6 00:18:45.891063 kernel: NET: Registered PF_VSOCK protocol family Nov 6 00:18:45.891087 systemd[1]: Populated /etc with preset unit settings. Nov 6 00:18:45.891111 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 00:18:45.891132 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 00:18:45.891154 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 00:18:45.891193 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 00:18:45.891215 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 00:18:45.891237 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 00:18:45.891257 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 00:18:45.891278 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 00:18:45.891303 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 00:18:45.891324 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 00:18:45.891346 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 00:18:45.891367 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 00:18:45.891394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:18:45.891415 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:18:45.891436 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 00:18:45.891461 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 00:18:45.891482 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 00:18:45.891503 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:18:45.891524 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 00:18:45.891545 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:18:45.891565 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:18:45.891587 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 00:18:45.891608 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 00:18:45.891632 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 00:18:45.891654 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 00:18:45.891676 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:18:45.891697 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:18:45.891719 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:18:45.891738 systemd[1]: Reached target swap.target - Swaps. 
Nov 6 00:18:45.891757 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 00:18:45.891775 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 00:18:45.891794 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 00:18:45.891817 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:18:45.891835 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:18:45.891853 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:18:45.891871 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 00:18:45.891890 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 00:18:45.891909 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 00:18:45.891927 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 00:18:45.891947 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:18:45.891967 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 00:18:45.891992 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 00:18:45.892011 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 00:18:45.892034 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 00:18:45.892061 systemd[1]: Reached target machines.target - Containers. Nov 6 00:18:45.892081 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 00:18:45.892102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:18:45.892122 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:18:45.892141 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 00:18:45.892164 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:18:45.894254 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:18:45.894297 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:18:45.894319 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 00:18:45.894341 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:18:45.894366 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 00:18:45.894388 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 00:18:45.894410 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 00:18:45.894462 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 00:18:45.894492 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 00:18:45.894516 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:18:45.894537 systemd[1]: Starting systemd-journald.service - Journal Service... 
Nov 6 00:18:45.894563 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:18:45.894583 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:18:45.894607 kernel: loop: module loaded Nov 6 00:18:45.894629 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 00:18:45.894649 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 00:18:45.894669 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:18:45.894689 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 00:18:45.894713 systemd[1]: Stopped verity-setup.service. Nov 6 00:18:45.894735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:18:45.894755 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 00:18:45.894776 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 00:18:45.894796 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 00:18:45.894816 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 00:18:45.894835 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 00:18:45.894856 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 00:18:45.894880 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:18:45.894902 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 00:18:45.894933 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 00:18:45.894955 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:18:45.894975 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:18:45.894997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:18:45.895017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:18:45.895039 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:18:45.895060 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:18:45.895086 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 00:18:45.895108 kernel: ACPI: bus type drm_connector registered Nov 6 00:18:45.895129 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 00:18:45.895150 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 00:18:45.895217 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:18:45.895241 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 00:18:45.895262 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 00:18:45.895284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:18:45.895311 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 00:18:45.895336 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 6 00:18:45.895356 kernel: fuse: init (API version 7.41) Nov 6 00:18:45.895376 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 00:18:45.895397 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:18:45.895417 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 00:18:45.895437 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:18:45.895458 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:18:45.895479 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 00:18:45.895501 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 00:18:45.895605 systemd-journald[1160]: Collecting audit messages is disabled. Nov 6 00:18:45.895649 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:18:45.895671 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:18:45.895692 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 00:18:45.895714 systemd-journald[1160]: Journal started Nov 6 00:18:45.895752 systemd-journald[1160]: Runtime Journal (/run/log/journal/e184d2cc7a544bce9f9a8952e2f527b2) is 4.9M, max 39.2M, 34.3M free. Nov 6 00:18:45.411137 systemd[1]: Queued start job for default target multi-user.target. Nov 6 00:18:45.426127 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 00:18:45.426637 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 00:18:45.900416 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:18:45.928984 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 00:18:45.963652 kernel: loop0: detected capacity change from 0 to 110984 Nov 6 00:18:45.958607 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:18:45.964327 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 00:18:45.978169 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 00:18:45.988726 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 00:18:45.993479 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:18:46.004541 kernel: loop1: detected capacity change from 0 to 8 Nov 6 00:18:46.005025 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 00:18:46.006545 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 00:18:46.011676 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 00:18:46.020301 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 00:18:46.028344 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 00:18:46.033490 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 00:18:46.036212 kernel: loop2: detected capacity change from 0 to 128016 Nov 6 00:18:46.113993 kernel: loop3: detected capacity change from 0 to 219144 Nov 6 00:18:46.115086 systemd-journald[1160]: Time spent on flushing to /var/log/journal/e184d2cc7a544bce9f9a8952e2f527b2 is 81.619ms for 1018 entries. 
Nov 6 00:18:46.115086 systemd-journald[1160]: System Journal (/var/log/journal/e184d2cc7a544bce9f9a8952e2f527b2) is 8M, max 195.6M, 187.6M free. Nov 6 00:18:46.241367 systemd-journald[1160]: Received client request to flush runtime journal. Nov 6 00:18:46.242075 kernel: loop4: detected capacity change from 0 to 110984 Nov 6 00:18:46.242131 kernel: loop5: detected capacity change from 0 to 8 Nov 6 00:18:46.242161 kernel: loop6: detected capacity change from 0 to 128016 Nov 6 00:18:46.110742 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:18:46.129006 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 00:18:46.141690 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:18:46.148606 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 00:18:46.159702 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:18:46.240815 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Nov 6 00:18:46.240833 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Nov 6 00:18:46.246505 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 00:18:46.258236 kernel: loop7: detected capacity change from 0 to 219144 Nov 6 00:18:46.275607 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:18:46.289201 (sd-merge)[1229]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Nov 6 00:18:46.290153 (sd-merge)[1229]: Merged extensions into '/usr'. Nov 6 00:18:46.304542 systemd[1]: Reload requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:18:46.304574 systemd[1]: Reloading... Nov 6 00:18:46.596216 zram_generator::config[1259]: No configuration found. Nov 6 00:18:46.967218 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 00:18:47.077333 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 00:18:47.077840 systemd[1]: Reloading finished in 772 ms. Nov 6 00:18:47.117107 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 00:18:47.119499 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:18:47.136822 systemd[1]: Starting ensure-sysext.service... Nov 6 00:18:47.143524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:18:47.207393 systemd[1]: Reload requested from client PID 1302 ('systemctl') (unit ensure-sysext.service)... Nov 6 00:18:47.207426 systemd[1]: Reloading... Nov 6 00:18:47.217326 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 00:18:47.217369 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:18:47.217752 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:18:47.218122 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:18:47.225686 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 00:18:47.226270 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. 
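[Editor's note] The (sd-merge) lines above show systemd-sysext picking up the containerd-flatcar, docker-flatcar, kubernetes, and oem-digitalocean extension images and overlaying them onto /usr. Purely for illustration, the Python sketch below enumerates raw sysext images the way that step's inputs appear in this log; the search paths are the conventional sysext locations, and the real merge is performed by systemd-sysext itself, not by this sketch.

    from pathlib import Path

    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    images = []
    for base in SEARCH_PATHS:
        path = Path(base)
        if path.is_dir():
            images += sorted(path.glob("*.raw"))

    for img in images:
        # e.g. /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
        print(img.stem, "->", img.resolve())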
Nov 6 00:18:47.226370 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Nov 6 00:18:47.240738 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:18:47.240761 systemd-tmpfiles[1303]: Skipping /boot Nov 6 00:18:47.286810 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:18:47.286834 systemd-tmpfiles[1303]: Skipping /boot Nov 6 00:18:47.420236 zram_generator::config[1342]: No configuration found. Nov 6 00:18:47.687137 systemd[1]: Reloading finished in 478 ms. Nov 6 00:18:47.712823 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 00:18:47.725685 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:18:47.737971 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:18:47.744704 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 00:18:47.749337 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 00:18:47.764927 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:18:47.769102 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:18:47.774696 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 00:18:47.781831 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:18:47.782231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:18:47.786841 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:18:47.790719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:18:47.794712 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:18:47.803513 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:18:47.803809 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:18:47.804009 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:18:47.815628 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 00:18:47.818003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:18:47.818350 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:18:47.818601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:18:47.818708 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Nov 6 00:18:47.818824 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:18:47.825482 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:18:47.825814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:18:47.829815 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:18:47.831001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:18:47.831230 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:18:47.831384 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:18:47.839499 systemd[1]: Finished ensure-sysext.service. Nov 6 00:18:47.852276 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 00:18:47.867358 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 00:18:47.879553 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 00:18:47.892537 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 00:18:47.894135 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:18:47.895303 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:18:47.910565 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 00:18:47.915105 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 00:18:47.918954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:18:47.919549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:18:47.921427 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:18:47.922490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:18:47.928080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:18:47.929596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:18:47.931778 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 00:18:47.943017 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:18:47.943160 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:18:47.962216 systemd-udevd[1379]: Using default interface naming scheme 'v255'. Nov 6 00:18:47.995900 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:18:48.003360 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:18:48.004044 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Nov 6 00:18:48.009426 augenrules[1419]: No rules Nov 6 00:18:48.014765 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:18:48.025691 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:18:48.203642 systemd-networkd[1422]: lo: Link UP Nov 6 00:18:48.204552 systemd-networkd[1422]: lo: Gained carrier Nov 6 00:18:48.207322 systemd-networkd[1422]: Enumeration completed Nov 6 00:18:48.207643 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:18:48.217659 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 00:18:48.265203 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 00:18:48.390724 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 00:18:48.407655 systemd-resolved[1378]: Positive Trust Anchors: Nov 6 00:18:48.409734 systemd-resolved[1378]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:18:48.409779 systemd-resolved[1378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:18:48.440159 systemd-resolved[1378]: Using system hostname 'ci-4459.1.0-n-46450dc2d5'. Nov 6 00:18:48.452030 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 00:18:48.455065 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:18:48.456938 systemd[1]: Reached target network.target - Network. Nov 6 00:18:48.459336 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:18:48.460461 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:18:48.462011 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 00:18:48.463894 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 00:18:48.465462 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 00:18:48.467295 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 00:18:48.468454 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 00:18:48.468654 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:18:48.470684 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 00:18:48.474336 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 00:18:48.475789 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 00:18:48.478668 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:18:48.483815 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 00:18:48.508302 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Nov 6 00:18:48.523744 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 00:18:48.524610 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 00:18:48.525150 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 00:18:48.533242 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 00:18:48.534635 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 00:18:48.537617 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 00:18:48.545586 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:18:48.547111 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:18:48.547659 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:18:48.547696 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:18:48.549743 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 00:18:48.554347 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 00:18:48.560500 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 00:18:48.567440 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 00:18:48.577960 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 00:18:48.582460 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 00:18:48.583786 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 00:18:48.595398 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 00:18:48.597853 jq[1464]: false Nov 6 00:18:48.609141 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 00:18:48.612299 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 00:18:48.616032 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 00:18:48.620508 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 00:18:48.635987 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 00:18:48.638592 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 00:18:48.644938 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 00:18:48.652476 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 00:18:48.655387 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 00:18:48.658381 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 00:18:48.659228 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 00:18:48.659479 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Nov 6 00:18:48.666205 google_oslogin_nss_cache[1466]: oslogin_cache_refresh[1466]: Refreshing passwd entry cache Nov 6 00:18:48.664576 oslogin_cache_refresh[1466]: Refreshing passwd entry cache Nov 6 00:18:48.679482 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 00:18:48.680677 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 00:18:48.686304 google_oslogin_nss_cache[1466]: oslogin_cache_refresh[1466]: Failure getting users, quitting Nov 6 00:18:48.686304 google_oslogin_nss_cache[1466]: oslogin_cache_refresh[1466]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:18:48.686304 google_oslogin_nss_cache[1466]: oslogin_cache_refresh[1466]: Refreshing group entry cache Nov 6 00:18:48.686304 google_oslogin_nss_cache[1466]: oslogin_cache_refresh[1466]: Failure getting groups, quitting Nov 6 00:18:48.686304 google_oslogin_nss_cache[1466]: oslogin_cache_refresh[1466]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:18:48.683942 oslogin_cache_refresh[1466]: Failure getting users, quitting Nov 6 00:18:48.683973 oslogin_cache_refresh[1466]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:18:48.684036 oslogin_cache_refresh[1466]: Refreshing group entry cache Nov 6 00:18:48.684541 oslogin_cache_refresh[1466]: Failure getting groups, quitting Nov 6 00:18:48.684551 oslogin_cache_refresh[1466]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:18:48.695650 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Nov 6 00:18:48.696048 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 00:18:48.697467 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 00:18:48.709794 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 6 00:18:48.711344 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 00:18:48.726210 jq[1476]: true Nov 6 00:18:48.742217 tar[1478]: linux-amd64/LICENSE Nov 6 00:18:48.742600 tar[1478]: linux-amd64/helm Nov 6 00:18:48.752651 extend-filesystems[1465]: Found /dev/vda6 Nov 6 00:18:48.754799 dbus-daemon[1462]: [system] SELinux support is enabled Nov 6 00:18:48.755040 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 00:18:48.758515 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 00:18:48.758551 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 00:18:48.759102 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 00:18:48.769095 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 00:18:48.792204 extend-filesystems[1465]: Found /dev/vda9 Nov 6 00:18:48.805999 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 00:18:48.806470 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 6 00:18:48.810378 extend-filesystems[1465]: Checking size of /dev/vda9 Nov 6 00:18:48.838369 update_engine[1475]: I20251106 00:18:48.828616 1475 main.cc:92] Flatcar Update Engine starting Nov 6 00:18:48.838714 jq[1500]: true Nov 6 00:18:48.859209 kernel: ISO 9660 Extensions: RRIP_1991A Nov 6 00:18:48.862208 update_engine[1475]: I20251106 00:18:48.859763 1475 update_check_scheduler.cc:74] Next update check in 3m10s Nov 6 00:18:48.866789 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 6 00:18:48.868772 systemd[1]: Started update-engine.service - Update Engine. Nov 6 00:18:48.871402 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 6 00:18:48.871443 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 00:18:48.878021 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 00:18:48.880466 coreos-metadata[1461]: Nov 06 00:18:48.879 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 00:18:48.885554 coreos-metadata[1461]: Nov 06 00:18:48.885 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) Nov 6 00:18:48.895566 extend-filesystems[1465]: Resized partition /dev/vda9 Nov 6 00:18:48.904205 extend-filesystems[1517]: resize2fs 1.47.3 (8-Jul-2025) Nov 6 00:18:48.920149 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 6 00:18:48.985928 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 00:18:48.988732 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 00:18:48.994742 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 00:18:49.047276 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 00:18:49.057672 systemd-networkd[1422]: eth1: Configuring with /run/systemd/network/10-be:5d:ad:ab:9f:e6.network. Nov 6 00:18:49.058561 systemd-networkd[1422]: eth1: Link UP Nov 6 00:18:49.058819 systemd-networkd[1422]: eth1: Gained carrier Nov 6 00:18:49.064145 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Nov 6 00:18:49.086137 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Nov 6 00:18:49.100282 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:18:49.105337 kernel: ACPI: button: Power Button [PWRF] Nov 6 00:18:49.106554 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 00:18:49.118804 systemd[1]: Starting sshkeys.service... Nov 6 00:18:49.128334 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 6 00:18:49.134434 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 00:18:49.144371 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 00:18:49.144371 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 6 00:18:49.144371 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 6 00:18:49.154001 extend-filesystems[1465]: Resized filesystem in /dev/vda9 Nov 6 00:18:49.148262 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 00:18:49.148683 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
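[Editor's note] The extend-filesystems step above grows the root filesystem online with resize2fs once the partition exposes the extra space; the log shows /dev/vda9 going from 553472 to 15121403 4k blocks. A minimal sketch of that online resize, assuming resize2fs is on PATH and the filesystem is already mounted read-write:

    import subprocess

    DEVICE = "/dev/vda9"  # device name from the log

    # With no explicit size argument, resize2fs grows the filesystem to fill the device.
    subprocess.run(["resize2fs", DEVICE], check=True)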
Nov 6 00:18:49.156374 systemd-logind[1474]: New seat seat0. Nov 6 00:18:49.157366 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 00:18:49.177088 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 6 00:18:49.181775 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 6 00:18:49.244212 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 6 00:18:49.244620 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 6 00:18:49.309209 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 6 00:18:49.309318 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 6 00:18:49.313216 kernel: Console: switching to colour dummy device 80x25 Nov 6 00:18:49.322194 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 6 00:18:49.322294 kernel: [drm] features: -context_init Nov 6 00:18:49.322314 kernel: [drm] number of scanouts: 1 Nov 6 00:18:49.325586 kernel: [drm] number of cap sets: 0 Nov 6 00:18:49.336577 systemd-networkd[1422]: eth0: Configuring with /run/systemd/network/10-02:43:2e:90:1d:4d.network. Nov 6 00:18:49.337505 systemd-networkd[1422]: eth0: Link UP Nov 6 00:18:49.338129 systemd-networkd[1422]: eth0: Gained carrier Nov 6 00:18:49.341327 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Nov 6 00:18:49.350929 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 6 00:18:49.351017 kernel: Console: switching to colour frame buffer device 128x48 Nov 6 00:18:49.357202 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 6 00:18:49.378280 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 00:18:49.400208 coreos-metadata[1552]: Nov 06 00:18:49.397 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 00:18:49.416228 coreos-metadata[1552]: Nov 06 00:18:49.414 INFO Fetch successful Nov 6 00:18:49.433507 unknown[1552]: wrote ssh authorized keys file for user: core Nov 6 00:18:49.472990 update-ssh-keys[1564]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:18:49.471982 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 6 00:18:49.476447 systemd[1]: Finished sshkeys.service. 
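The sshkeys flow above (coreos-metadata fetching http://169.254.169.254/metadata/v1.json, then update-ssh-keys rewriting /home/core/.ssh/authorized_keys) can be approximated in a few lines of Python. This is an illustrative sketch, not the actual agent: the endpoint URL and output path are taken from the log, while the public_keys field name is an assumption about the droplet metadata schema.

    # Hypothetical re-implementation of the metadata -> authorized_keys step above.
    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"   # from the log

    with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
        metadata = json.load(resp)

    keys = metadata.get("public_keys", [])    # assumed field name in the metadata JSON
    with open("/home/core/.ssh/authorized_keys", "w") as f:
        f.write("\n".join(keys) + "\n")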
Nov 6 00:18:49.532916 containerd[1494]: time="2025-11-06T00:18:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 00:18:49.536636 containerd[1494]: time="2025-11-06T00:18:49.536567827Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 6 00:18:49.556798 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:18:49.560104 containerd[1494]: time="2025-11-06T00:18:49.559993196Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.539µs" Nov 6 00:18:49.560104 containerd[1494]: time="2025-11-06T00:18:49.560033179Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 00:18:49.560104 containerd[1494]: time="2025-11-06T00:18:49.560055169Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 00:18:49.560300 containerd[1494]: time="2025-11-06T00:18:49.560260788Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 00:18:49.560300 containerd[1494]: time="2025-11-06T00:18:49.560278033Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 00:18:49.560357 containerd[1494]: time="2025-11-06T00:18:49.560303808Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:18:49.560394 containerd[1494]: time="2025-11-06T00:18:49.560360531Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:18:49.560394 containerd[1494]: time="2025-11-06T00:18:49.560372070Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:18:49.562288 containerd[1494]: time="2025-11-06T00:18:49.560610120Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:18:49.562288 containerd[1494]: time="2025-11-06T00:18:49.560631241Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:18:49.562288 containerd[1494]: time="2025-11-06T00:18:49.560641742Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:18:49.562288 containerd[1494]: time="2025-11-06T00:18:49.560650180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 00:18:49.562288 containerd[1494]: time="2025-11-06T00:18:49.560747117Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 00:18:49.562288 containerd[1494]: time="2025-11-06T00:18:49.560980246Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:18:49.562288 containerd[1494]: time="2025-11-06T00:18:49.561013490Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:18:49.562288 containerd[1494]: time="2025-11-06T00:18:49.561022541Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 00:18:49.562288 containerd[1494]: time="2025-11-06T00:18:49.561063745Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 00:18:49.565981 containerd[1494]: time="2025-11-06T00:18:49.565620654Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 00:18:49.565981 containerd[1494]: time="2025-11-06T00:18:49.565789994Z" level=info msg="metadata content store policy set" policy=shared Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570236464Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570324129Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570343815Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570359571Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570388005Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570404761Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570425628Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570446312Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570461666Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570477520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570489983Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570507749Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570704107Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 00:18:49.571464 containerd[1494]: time="2025-11-06T00:18:49.570729121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 00:18:49.571981 containerd[1494]: time="2025-11-06T00:18:49.570748884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 00:18:49.571981 
containerd[1494]: time="2025-11-06T00:18:49.570778382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 6 00:18:49.571981 containerd[1494]: time="2025-11-06T00:18:49.570793651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 00:18:49.571981 containerd[1494]: time="2025-11-06T00:18:49.570808681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 00:18:49.571981 containerd[1494]: time="2025-11-06T00:18:49.570823080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 00:18:49.571981 containerd[1494]: time="2025-11-06T00:18:49.570836076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 00:18:49.571981 containerd[1494]: time="2025-11-06T00:18:49.570852550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 00:18:49.571981 containerd[1494]: time="2025-11-06T00:18:49.570886605Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 00:18:49.571981 containerd[1494]: time="2025-11-06T00:18:49.570900157Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 00:18:49.571981 containerd[1494]: time="2025-11-06T00:18:49.570983494Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 00:18:49.571981 containerd[1494]: time="2025-11-06T00:18:49.571003175Z" level=info msg="Start snapshots syncer" Nov 6 00:18:49.571981 containerd[1494]: time="2025-11-06T00:18:49.571022864Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 00:18:49.576194 containerd[1494]: time="2025-11-06T00:18:49.574308111Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 00:18:49.579201 containerd[1494]: time="2025-11-06T00:18:49.576566463Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:18:49.579201 containerd[1494]: time="2025-11-06T00:18:49.576745587Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:18:49.579201 containerd[1494]: time="2025-11-06T00:18:49.576941929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:18:49.579201 containerd[1494]: time="2025-11-06T00:18:49.576970934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:18:49.579201 containerd[1494]: time="2025-11-06T00:18:49.577005731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:18:49.579201 containerd[1494]: time="2025-11-06T00:18:49.577022837Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:18:49.579201 containerd[1494]: time="2025-11-06T00:18:49.577042638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:18:49.579201 containerd[1494]: time="2025-11-06T00:18:49.577056700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:18:49.579201 containerd[1494]: time="2025-11-06T00:18:49.577070167Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:18:49.579201 containerd[1494]: time="2025-11-06T00:18:49.577104076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:18:49.579201 containerd[1494]: 
time="2025-11-06T00:18:49.577118076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581537559Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581652476Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581683795Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581695401Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581709113Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581720479Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581733172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581747607Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581770534Z" level=info msg="runtime interface created" Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581777588Z" level=info msg="created NRI interface" Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581788460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581808193Z" level=info msg="Connect containerd service" Nov 6 00:18:49.583196 containerd[1494]: time="2025-11-06T00:18:49.581847957Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:18:49.592428 containerd[1494]: time="2025-11-06T00:18:49.591213365Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:18:49.666763 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:18:49.674139 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 00:18:49.717583 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:18:49.718337 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:18:49.726105 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:18:49.802961 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:18:49.858035 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:18:49.864527 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 00:18:49.864961 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 6 00:18:49.883273 coreos-metadata[1461]: Nov 06 00:18:49.883 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Nov 6 00:18:49.886910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:18:49.900238 coreos-metadata[1461]: Nov 06 00:18:49.898 INFO Fetch successful Nov 6 00:18:50.002394 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 00:18:50.009602 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:18:50.019358 containerd[1494]: time="2025-11-06T00:18:50.018643417Z" level=info msg="Start subscribing containerd event" Nov 6 00:18:50.019358 containerd[1494]: time="2025-11-06T00:18:50.018716098Z" level=info msg="Start recovering state" Nov 6 00:18:50.019358 containerd[1494]: time="2025-11-06T00:18:50.018840768Z" level=info msg="Start event monitor" Nov 6 00:18:50.019358 containerd[1494]: time="2025-11-06T00:18:50.018857727Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:18:50.019358 containerd[1494]: time="2025-11-06T00:18:50.018864834Z" level=info msg="Start streaming server" Nov 6 00:18:50.019358 containerd[1494]: time="2025-11-06T00:18:50.018876353Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:18:50.019358 containerd[1494]: time="2025-11-06T00:18:50.018883615Z" level=info msg="runtime interface starting up..." Nov 6 00:18:50.019358 containerd[1494]: time="2025-11-06T00:18:50.018889115Z" level=info msg="starting plugins..." Nov 6 00:18:50.019358 containerd[1494]: time="2025-11-06T00:18:50.018910148Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:18:50.019729 containerd[1494]: time="2025-11-06T00:18:50.019549578Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:18:50.019886 containerd[1494]: time="2025-11-06T00:18:50.019659988Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:18:50.023216 containerd[1494]: time="2025-11-06T00:18:50.019987676Z" level=info msg="containerd successfully booted in 0.487598s" Nov 6 00:18:50.020350 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:18:50.051346 systemd-logind[1474]: Watching system buttons on /dev/input/event2 (Power Button) Nov 6 00:18:50.088720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:18:50.090239 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:18:50.097073 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:18:50.110380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:18:50.113956 systemd-logind[1474]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 00:18:50.194850 tar[1478]: linux-amd64/README.md Nov 6 00:18:50.236634 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:18:50.238359 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:18:50.245552 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:18:50.252775 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:18:50.258933 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 6 00:18:50.311025 kernel: EDAC MC: Ver: 3.0.0 Nov 6 00:18:50.333548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:18:50.571393 systemd-networkd[1422]: eth0: Gained IPv6LL Nov 6 00:18:50.574508 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:18:50.578530 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:18:50.589569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:18:50.594502 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 00:18:50.627879 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:18:50.891485 systemd-networkd[1422]: eth1: Gained IPv6LL Nov 6 00:18:51.638904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:18:51.642001 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:18:51.643512 systemd[1]: Startup finished in 3.642s (kernel) + 6.040s (initrd) + 7.010s (userspace) = 16.694s. Nov 6 00:18:51.654607 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:18:52.288368 kubelet[1656]: E1106 00:18:52.288261 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:18:52.292284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:18:52.292526 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:18:52.293139 systemd[1]: kubelet.service: Consumed 1.227s CPU time, 257.1M memory peak. Nov 6 00:18:52.516600 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:18:52.518628 systemd[1]: Started sshd@0-64.23.183.231:22-139.178.68.195:42428.service - OpenSSH per-connection server daemon (139.178.68.195:42428). Nov 6 00:18:52.631366 sshd[1668]: Accepted publickey for core from 139.178.68.195 port 42428 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:18:52.633522 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:18:52.643766 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:18:52.645210 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:18:52.658124 systemd-logind[1474]: New session 1 of user core. Nov 6 00:18:52.682722 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:18:52.689727 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:18:52.703415 (systemd)[1673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:18:52.707324 systemd-logind[1474]: New session c1 of user core. Nov 6 00:18:52.901558 systemd[1673]: Queued start job for default target default.target. Nov 6 00:18:52.911583 systemd[1673]: Created slice app.slice - User Application Slice. Nov 6 00:18:52.911617 systemd[1673]: Reached target paths.target - Paths. Nov 6 00:18:52.911667 systemd[1673]: Reached target timers.target - Timers. Nov 6 00:18:52.913289 systemd[1673]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Nov 6 00:18:52.929406 systemd[1673]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:18:52.929558 systemd[1673]: Reached target sockets.target - Sockets. Nov 6 00:18:52.929616 systemd[1673]: Reached target basic.target - Basic System. Nov 6 00:18:52.929664 systemd[1673]: Reached target default.target - Main User Target. Nov 6 00:18:52.929702 systemd[1673]: Startup finished in 213ms. Nov 6 00:18:52.930083 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:18:52.938518 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:18:53.027584 systemd[1]: Started sshd@1-64.23.183.231:22-139.178.68.195:42434.service - OpenSSH per-connection server daemon (139.178.68.195:42434). Nov 6 00:18:53.110465 sshd[1684]: Accepted publickey for core from 139.178.68.195 port 42434 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:18:53.112431 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:18:53.118687 systemd-logind[1474]: New session 2 of user core. Nov 6 00:18:53.123408 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:18:53.187272 sshd[1687]: Connection closed by 139.178.68.195 port 42434 Nov 6 00:18:53.187277 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Nov 6 00:18:53.201088 systemd[1]: sshd@1-64.23.183.231:22-139.178.68.195:42434.service: Deactivated successfully. Nov 6 00:18:53.203016 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 00:18:53.204609 systemd-logind[1474]: Session 2 logged out. Waiting for processes to exit. Nov 6 00:18:53.207791 systemd-logind[1474]: Removed session 2. Nov 6 00:18:53.210043 systemd[1]: Started sshd@2-64.23.183.231:22-139.178.68.195:42444.service - OpenSSH per-connection server daemon (139.178.68.195:42444). Nov 6 00:18:53.276638 sshd[1693]: Accepted publickey for core from 139.178.68.195 port 42444 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:18:53.278306 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:18:53.283702 systemd-logind[1474]: New session 3 of user core. Nov 6 00:18:53.291826 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:18:53.347824 sshd[1696]: Connection closed by 139.178.68.195 port 42444 Nov 6 00:18:53.348398 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Nov 6 00:18:53.368968 systemd[1]: sshd@2-64.23.183.231:22-139.178.68.195:42444.service: Deactivated successfully. Nov 6 00:18:53.371565 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:18:53.373816 systemd-logind[1474]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:18:53.376578 systemd[1]: Started sshd@3-64.23.183.231:22-139.178.68.195:42448.service - OpenSSH per-connection server daemon (139.178.68.195:42448). Nov 6 00:18:53.378652 systemd-logind[1474]: Removed session 3. Nov 6 00:18:53.443026 sshd[1702]: Accepted publickey for core from 139.178.68.195 port 42448 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:18:53.444856 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:18:53.450332 systemd-logind[1474]: New session 4 of user core. Nov 6 00:18:53.464490 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 6 00:18:53.525144 sshd[1705]: Connection closed by 139.178.68.195 port 42448 Nov 6 00:18:53.526030 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Nov 6 00:18:53.543321 systemd[1]: sshd@3-64.23.183.231:22-139.178.68.195:42448.service: Deactivated successfully. Nov 6 00:18:53.545900 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:18:53.546704 systemd-logind[1474]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:18:53.551163 systemd[1]: Started sshd@4-64.23.183.231:22-139.178.68.195:42452.service - OpenSSH per-connection server daemon (139.178.68.195:42452). Nov 6 00:18:53.553442 systemd-logind[1474]: Removed session 4. Nov 6 00:18:53.629076 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 42452 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:18:53.630857 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:18:53.636612 systemd-logind[1474]: New session 5 of user core. Nov 6 00:18:53.642536 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:18:53.713280 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:18:53.713703 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:18:53.733259 sudo[1715]: pam_unix(sudo:session): session closed for user root Nov 6 00:18:53.737891 sshd[1714]: Connection closed by 139.178.68.195 port 42452 Nov 6 00:18:53.738476 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Nov 6 00:18:53.750133 systemd[1]: sshd@4-64.23.183.231:22-139.178.68.195:42452.service: Deactivated successfully. Nov 6 00:18:53.752448 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:18:53.754279 systemd-logind[1474]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:18:53.757700 systemd[1]: Started sshd@5-64.23.183.231:22-139.178.68.195:42462.service - OpenSSH per-connection server daemon (139.178.68.195:42462). Nov 6 00:18:53.759269 systemd-logind[1474]: Removed session 5. Nov 6 00:18:53.834603 sshd[1721]: Accepted publickey for core from 139.178.68.195 port 42462 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:18:53.836170 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:18:53.842438 systemd-logind[1474]: New session 6 of user core. Nov 6 00:18:53.848465 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 00:18:53.907609 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:18:53.907893 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:18:53.923861 sudo[1726]: pam_unix(sudo:session): session closed for user root Nov 6 00:18:53.930662 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:18:53.931318 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:18:53.942162 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:18:53.984246 augenrules[1748]: No rules Nov 6 00:18:53.984015 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:18:53.984329 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 6 00:18:53.985420 sudo[1725]: pam_unix(sudo:session): session closed for user root Nov 6 00:18:53.989109 sshd[1724]: Connection closed by 139.178.68.195 port 42462 Nov 6 00:18:53.989677 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Nov 6 00:18:54.003327 systemd[1]: sshd@5-64.23.183.231:22-139.178.68.195:42462.service: Deactivated successfully. Nov 6 00:18:54.005309 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:18:54.006148 systemd-logind[1474]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:18:54.009530 systemd[1]: Started sshd@6-64.23.183.231:22-139.178.68.195:42472.service - OpenSSH per-connection server daemon (139.178.68.195:42472). Nov 6 00:18:54.010861 systemd-logind[1474]: Removed session 6. Nov 6 00:18:54.073109 sshd[1757]: Accepted publickey for core from 139.178.68.195 port 42472 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:18:54.074620 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:18:54.080591 systemd-logind[1474]: New session 7 of user core. Nov 6 00:18:54.094442 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:18:54.153902 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:18:54.154668 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:18:54.646613 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:18:54.658678 (dockerd)[1779]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:18:55.018802 dockerd[1779]: time="2025-11-06T00:18:55.018304535Z" level=info msg="Starting up" Nov 6 00:18:55.019779 dockerd[1779]: time="2025-11-06T00:18:55.019750702Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:18:55.037144 dockerd[1779]: time="2025-11-06T00:18:55.037043826Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:18:55.054952 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3171570359-merged.mount: Deactivated successfully. Nov 6 00:18:55.113826 dockerd[1779]: time="2025-11-06T00:18:55.113749131Z" level=info msg="Loading containers: start." Nov 6 00:18:55.126336 kernel: Initializing XFRM netlink socket Nov 6 00:18:55.436574 systemd-networkd[1422]: docker0: Link UP Nov 6 00:18:55.446964 dockerd[1779]: time="2025-11-06T00:18:55.446865961Z" level=info msg="Loading containers: done." 
Nov 6 00:18:55.469880 dockerd[1779]: time="2025-11-06T00:18:55.469806022Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:18:55.470118 dockerd[1779]: time="2025-11-06T00:18:55.469927823Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:18:55.470118 dockerd[1779]: time="2025-11-06T00:18:55.470054283Z" level=info msg="Initializing buildkit" Nov 6 00:18:55.505120 dockerd[1779]: time="2025-11-06T00:18:55.505047392Z" level=info msg="Completed buildkit initialization" Nov 6 00:18:55.514375 dockerd[1779]: time="2025-11-06T00:18:55.514282216Z" level=info msg="Daemon has completed initialization" Nov 6 00:18:55.514542 dockerd[1779]: time="2025-11-06T00:18:55.514468051Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:18:55.515610 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:18:56.462434 systemd-resolved[1378]: Clock change detected. Flushing caches. Nov 6 00:18:56.463028 systemd-timesyncd[1391]: Contacted time server 216.31.17.12:123 (1.flatcar.pool.ntp.org). Nov 6 00:18:56.463109 systemd-timesyncd[1391]: Initial clock synchronization to Thu 2025-11-06 00:18:56.462192 UTC. Nov 6 00:18:56.955747 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4075794529-merged.mount: Deactivated successfully. Nov 6 00:18:57.191598 containerd[1494]: time="2025-11-06T00:18:57.191452804Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 6 00:18:57.875419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093484807.mount: Deactivated successfully. 
Nov 6 00:18:59.081067 containerd[1494]: time="2025-11-06T00:18:59.080990097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:18:59.083432 containerd[1494]: time="2025-11-06T00:18:59.083081612Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 6 00:18:59.084298 containerd[1494]: time="2025-11-06T00:18:59.084250289Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:18:59.091717 containerd[1494]: time="2025-11-06T00:18:59.091651745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:18:59.093588 containerd[1494]: time="2025-11-06T00:18:59.093530324Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.90115895s" Nov 6 00:18:59.094009 containerd[1494]: time="2025-11-06T00:18:59.093789548Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 6 00:18:59.094714 containerd[1494]: time="2025-11-06T00:18:59.094598976Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 6 00:19:00.430209 containerd[1494]: time="2025-11-06T00:19:00.430144276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:00.431396 containerd[1494]: time="2025-11-06T00:19:00.431363894Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 6 00:19:00.431797 containerd[1494]: time="2025-11-06T00:19:00.431768669Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:00.434370 containerd[1494]: time="2025-11-06T00:19:00.434316734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:00.436161 containerd[1494]: time="2025-11-06T00:19:00.435521123Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.340673969s" Nov 6 00:19:00.436161 containerd[1494]: time="2025-11-06T00:19:00.435564617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 6 00:19:00.436161 containerd[1494]: 
time="2025-11-06T00:19:00.436112554Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 6 00:19:01.470163 containerd[1494]: time="2025-11-06T00:19:01.470095225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:01.471753 containerd[1494]: time="2025-11-06T00:19:01.471700660Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 6 00:19:01.472147 containerd[1494]: time="2025-11-06T00:19:01.472117374Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:01.476124 containerd[1494]: time="2025-11-06T00:19:01.475075514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:01.476543 containerd[1494]: time="2025-11-06T00:19:01.476057865Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.039915741s" Nov 6 00:19:01.476627 containerd[1494]: time="2025-11-06T00:19:01.476546493Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 6 00:19:01.477459 containerd[1494]: time="2025-11-06T00:19:01.477111803Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 6 00:19:02.705986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238658142.mount: Deactivated successfully. 
Nov 6 00:19:03.160040 containerd[1494]: time="2025-11-06T00:19:03.159515961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:03.160591 containerd[1494]: time="2025-11-06T00:19:03.160420963Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 6 00:19:03.161699 containerd[1494]: time="2025-11-06T00:19:03.161623174Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:03.163940 containerd[1494]: time="2025-11-06T00:19:03.163867055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:03.164772 containerd[1494]: time="2025-11-06T00:19:03.164603748Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.687388909s" Nov 6 00:19:03.164772 containerd[1494]: time="2025-11-06T00:19:03.164644751Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 6 00:19:03.165663 containerd[1494]: time="2025-11-06T00:19:03.165607015Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 6 00:19:03.167181 systemd-resolved[1378]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 6 00:19:03.450385 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:19:03.453215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:19:03.639407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:19:03.651016 (kubelet)[2078]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:19:03.687924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417402256.mount: Deactivated successfully. Nov 6 00:19:03.732443 kubelet[2078]: E1106 00:19:03.732211 2078 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:19:03.739788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:19:03.739982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:19:03.741066 systemd[1]: kubelet.service: Consumed 200ms CPU time, 110.3M memory peak. 
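The kubelet exit above mirrors the one at first start: /var/lib/kubelet/config.yaml does not exist yet (it is normally written later, for example by kubeadm during init/join), so the unit fails and systemd schedules another restart. A minimal sketch of the same pre-flight check, using only the path from the logged error:

    # Reproduce the config-file check the logged kubelet error corresponds to.
    from pathlib import Path

    config = Path("/var/lib/kubelet/config.yaml")   # path from the error message
    if not config.exists():
        raise SystemExit(f"failed to load kubelet config file, path: {config}: no such file or directory")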
Nov 6 00:19:04.646847 containerd[1494]: time="2025-11-06T00:19:04.645704315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:04.646847 containerd[1494]: time="2025-11-06T00:19:04.646599561Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 6 00:19:04.646847 containerd[1494]: time="2025-11-06T00:19:04.646784836Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:04.650318 containerd[1494]: time="2025-11-06T00:19:04.650263695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:04.651732 containerd[1494]: time="2025-11-06T00:19:04.651674745Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.485868141s" Nov 6 00:19:04.652083 containerd[1494]: time="2025-11-06T00:19:04.651737033Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 6 00:19:04.652514 containerd[1494]: time="2025-11-06T00:19:04.652489159Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 6 00:19:05.280266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482702691.mount: Deactivated successfully. 
Nov 6 00:19:05.284305 containerd[1494]: time="2025-11-06T00:19:05.284246066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:05.285295 containerd[1494]: time="2025-11-06T00:19:05.285250127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 6 00:19:05.286409 containerd[1494]: time="2025-11-06T00:19:05.286360550Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:05.287822 containerd[1494]: time="2025-11-06T00:19:05.287763436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:05.289196 containerd[1494]: time="2025-11-06T00:19:05.288657345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 636.136957ms" Nov 6 00:19:05.289196 containerd[1494]: time="2025-11-06T00:19:05.288692971Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 6 00:19:05.290209 containerd[1494]: time="2025-11-06T00:19:05.290189924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 6 00:19:06.262700 systemd-resolved[1378]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 6 00:19:08.088090 containerd[1494]: time="2025-11-06T00:19:08.088004274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:08.089509 containerd[1494]: time="2025-11-06T00:19:08.089293695Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 6 00:19:08.090067 containerd[1494]: time="2025-11-06T00:19:08.090035760Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:08.093414 containerd[1494]: time="2025-11-06T00:19:08.093358230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:08.094661 containerd[1494]: time="2025-11-06T00:19:08.094621684Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.80415981s" Nov 6 00:19:08.094819 containerd[1494]: time="2025-11-06T00:19:08.094802549Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 6 00:19:11.951201 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
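The pull durations and byte sizes containerd logs above make it easy to eyeball registry throughput for this boot. The sketch below is plain arithmetic over three of the logged (size, duration) pairs; the numbers are copied verbatim from the messages above.

    # Rough pull throughput from the sizes/durations containerd logged above.
    pulls = {
        "kube-apiserver:v1.34.1": (27_061_991, 1.90115895),
        "kube-proxy:v1.34.1":     (25_963_718, 1.687388909),
        "etcd:3.6.4-0":           (74_311_308, 2.80415981),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")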
Nov 6 00:19:11.951548 systemd[1]: kubelet.service: Consumed 200ms CPU time, 110.3M memory peak. Nov 6 00:19:11.954697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:19:11.994917 systemd[1]: Reload requested from client PID 2212 ('systemctl') (unit session-7.scope)... Nov 6 00:19:11.994943 systemd[1]: Reloading... Nov 6 00:19:12.138508 zram_generator::config[2257]: No configuration found. Nov 6 00:19:12.364129 systemd[1]: Reloading finished in 368 ms. Nov 6 00:19:12.430566 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:19:12.430666 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 00:19:12.431046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:19:12.431105 systemd[1]: kubelet.service: Consumed 126ms CPU time, 98.2M memory peak. Nov 6 00:19:12.433001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:19:12.609349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:19:12.620792 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:19:12.677721 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:19:12.677721 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:19:12.678268 kubelet[2309]: I1106 00:19:12.677775 2309 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:19:13.334807 kubelet[2309]: I1106 00:19:13.334723 2309 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 00:19:13.334807 kubelet[2309]: I1106 00:19:13.334783 2309 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:19:13.338933 kubelet[2309]: I1106 00:19:13.338866 2309 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 00:19:13.338933 kubelet[2309]: I1106 00:19:13.338934 2309 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:19:13.340092 kubelet[2309]: I1106 00:19:13.340052 2309 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:19:13.350395 kubelet[2309]: E1106 00:19:13.350351 2309 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.23.183.231:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.183.231:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:19:13.352230 kubelet[2309]: I1106 00:19:13.352197 2309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:19:13.359031 kubelet[2309]: I1106 00:19:13.358991 2309 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:19:13.366842 kubelet[2309]: I1106 00:19:13.366780 2309 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 6 00:19:13.368040 kubelet[2309]: I1106 00:19:13.367768 2309 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:19:13.369623 kubelet[2309]: I1106 00:19:13.367806 2309 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-46450dc2d5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:19:13.370352 kubelet[2309]: I1106 00:19:13.369885 2309 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:19:13.370352 kubelet[2309]: I1106 00:19:13.369917 2309 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 00:19:13.370352 kubelet[2309]: I1106 00:19:13.370096 2309 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 00:19:13.372226 kubelet[2309]: I1106 00:19:13.372205 2309 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:19:13.372809 kubelet[2309]: I1106 00:19:13.372792 2309 kubelet.go:475] "Attempting to sync node with API server" Nov 6 00:19:13.373318 kubelet[2309]: I1106 00:19:13.373302 2309 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:19:13.373476 kubelet[2309]: I1106 00:19:13.373455 2309 kubelet.go:387] "Adding apiserver pod source" Nov 6 00:19:13.373566 kubelet[2309]: E1106 00:19:13.373530 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.183.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-46450dc2d5&limit=500&resourceVersion=0\": dial tcp 64.23.183.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:19:13.375009 kubelet[2309]: I1106 00:19:13.374590 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:19:13.378746 kubelet[2309]: E1106 00:19:13.378714 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://64.23.183.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.183.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:19:13.379896 kubelet[2309]: I1106 00:19:13.379873 2309 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:19:13.380680 kubelet[2309]: I1106 00:19:13.380656 2309 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:19:13.380790 kubelet[2309]: I1106 00:19:13.380782 2309 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 00:19:13.380895 kubelet[2309]: W1106 00:19:13.380883 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 00:19:13.385665 kubelet[2309]: I1106 00:19:13.385634 2309 server.go:1262] "Started kubelet" Nov 6 00:19:13.391786 kubelet[2309]: I1106 00:19:13.391506 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:19:13.398262 kubelet[2309]: E1106 00:19:13.395876 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.183.231:6443/api/v1/namespaces/default/events\": dial tcp 64.23.183.231:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-46450dc2d5.187542deafd565a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-46450dc2d5,UID:ci-4459.1.0-n-46450dc2d5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-46450dc2d5,},FirstTimestamp:2025-11-06 00:19:13.385588131 +0000 UTC m=+0.759484421,LastTimestamp:2025-11-06 00:19:13.385588131 +0000 UTC m=+0.759484421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-46450dc2d5,}" Nov 6 00:19:13.401708 kubelet[2309]: I1106 00:19:13.401273 2309 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 00:19:13.402160 kubelet[2309]: E1106 00:19:13.402101 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-46450dc2d5\" not found" Nov 6 00:19:13.403759 kubelet[2309]: I1106 00:19:13.402500 2309 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 00:19:13.403759 kubelet[2309]: I1106 00:19:13.402578 2309 reconciler.go:29] "Reconciler: start to sync state" Nov 6 00:19:13.405697 kubelet[2309]: I1106 00:19:13.404822 2309 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:19:13.406821 kubelet[2309]: I1106 00:19:13.406785 2309 server.go:310] "Adding debug handlers to kubelet server" Nov 6 00:19:13.411509 kubelet[2309]: I1106 00:19:13.411381 2309 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:19:13.411509 kubelet[2309]: I1106 00:19:13.411453 2309 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 00:19:13.411888 kubelet[2309]: I1106 00:19:13.411871 2309 server.go:249] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:19:13.413323 kubelet[2309]: I1106 00:19:13.413295 2309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:19:13.414157 kubelet[2309]: E1106 00:19:13.414129 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.183.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.183.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:19:13.414378 kubelet[2309]: E1106 00:19:13.414340 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.183.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-46450dc2d5?timeout=10s\": dial tcp 64.23.183.231:6443: connect: connection refused" interval="200ms" Nov 6 00:19:13.415655 kubelet[2309]: I1106 00:19:13.415630 2309 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:19:13.415830 kubelet[2309]: I1106 00:19:13.415813 2309 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:19:13.417044 kubelet[2309]: E1106 00:19:13.417021 2309 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:19:13.417818 kubelet[2309]: I1106 00:19:13.417801 2309 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:19:13.428615 kubelet[2309]: I1106 00:19:13.428554 2309 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 00:19:13.430097 kubelet[2309]: I1106 00:19:13.430068 2309 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 6 00:19:13.430097 kubelet[2309]: I1106 00:19:13.430097 2309 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 00:19:13.430174 kubelet[2309]: I1106 00:19:13.430135 2309 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 00:19:13.431098 kubelet[2309]: E1106 00:19:13.430188 2309 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:19:13.441175 kubelet[2309]: E1106 00:19:13.441120 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.183.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.183.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:19:13.448495 kubelet[2309]: I1106 00:19:13.448352 2309 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:19:13.448495 kubelet[2309]: I1106 00:19:13.448370 2309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:19:13.448495 kubelet[2309]: I1106 00:19:13.448395 2309 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:19:13.450314 kubelet[2309]: I1106 00:19:13.450290 2309 policy_none.go:49] "None policy: Start" Nov 6 00:19:13.450679 kubelet[2309]: I1106 00:19:13.450445 2309 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 00:19:13.450679 kubelet[2309]: I1106 00:19:13.450462 2309 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 00:19:13.451376 kubelet[2309]: I1106 00:19:13.451363 2309 policy_none.go:47] "Start" Nov 6 00:19:13.458375 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:19:13.471163 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:19:13.475859 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 00:19:13.486740 kubelet[2309]: E1106 00:19:13.486698 2309 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:19:13.487404 kubelet[2309]: I1106 00:19:13.487353 2309 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:19:13.487404 kubelet[2309]: I1106 00:19:13.487376 2309 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:19:13.487837 kubelet[2309]: I1106 00:19:13.487742 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:19:13.492201 kubelet[2309]: E1106 00:19:13.492142 2309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:19:13.492201 kubelet[2309]: E1106 00:19:13.492192 2309 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-n-46450dc2d5\" not found" Nov 6 00:19:13.543557 systemd[1]: Created slice kubepods-burstable-pod213fd8ba6ea8403664a7acd7de330ecf.slice - libcontainer container kubepods-burstable-pod213fd8ba6ea8403664a7acd7de330ecf.slice. 
Nov 6 00:19:13.555693 kubelet[2309]: E1106 00:19:13.555492 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-46450dc2d5\" not found" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.561040 systemd[1]: Created slice kubepods-burstable-pod5fea9a4170d304812a0652a89d72f574.slice - libcontainer container kubepods-burstable-pod5fea9a4170d304812a0652a89d72f574.slice. Nov 6 00:19:13.566451 kubelet[2309]: E1106 00:19:13.566415 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-46450dc2d5\" not found" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.569921 systemd[1]: Created slice kubepods-burstable-pod2372855eb8e07e146e4ed057116d43c0.slice - libcontainer container kubepods-burstable-pod2372855eb8e07e146e4ed057116d43c0.slice. Nov 6 00:19:13.573738 kubelet[2309]: E1106 00:19:13.573703 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-46450dc2d5\" not found" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.589650 kubelet[2309]: I1106 00:19:13.589395 2309 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.591940 kubelet[2309]: E1106 00:19:13.591895 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.183.231:6443/api/v1/nodes\": dial tcp 64.23.183.231:6443: connect: connection refused" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.603541 kubelet[2309]: I1106 00:19:13.603480 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fea9a4170d304812a0652a89d72f574-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" (UID: \"5fea9a4170d304812a0652a89d72f574\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.603541 kubelet[2309]: I1106 00:19:13.603528 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fea9a4170d304812a0652a89d72f574-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" (UID: \"5fea9a4170d304812a0652a89d72f574\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.603541 kubelet[2309]: I1106 00:19:13.603553 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2372855eb8e07e146e4ed057116d43c0-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-46450dc2d5\" (UID: \"2372855eb8e07e146e4ed057116d43c0\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.603956 kubelet[2309]: I1106 00:19:13.603572 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5fea9a4170d304812a0652a89d72f574-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" (UID: \"5fea9a4170d304812a0652a89d72f574\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.603956 kubelet[2309]: I1106 00:19:13.603589 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fea9a4170d304812a0652a89d72f574-k8s-certs\") pod 
\"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" (UID: \"5fea9a4170d304812a0652a89d72f574\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.603956 kubelet[2309]: I1106 00:19:13.603637 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5fea9a4170d304812a0652a89d72f574-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" (UID: \"5fea9a4170d304812a0652a89d72f574\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.603956 kubelet[2309]: I1106 00:19:13.603672 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/213fd8ba6ea8403664a7acd7de330ecf-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-46450dc2d5\" (UID: \"213fd8ba6ea8403664a7acd7de330ecf\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.603956 kubelet[2309]: I1106 00:19:13.603688 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/213fd8ba6ea8403664a7acd7de330ecf-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-46450dc2d5\" (UID: \"213fd8ba6ea8403664a7acd7de330ecf\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.604098 kubelet[2309]: I1106 00:19:13.603703 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/213fd8ba6ea8403664a7acd7de330ecf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-46450dc2d5\" (UID: \"213fd8ba6ea8403664a7acd7de330ecf\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.615035 kubelet[2309]: E1106 00:19:13.614972 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.183.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-46450dc2d5?timeout=10s\": dial tcp 64.23.183.231:6443: connect: connection refused" interval="400ms" Nov 6 00:19:13.794099 kubelet[2309]: I1106 00:19:13.794026 2309 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.795175 kubelet[2309]: E1106 00:19:13.795135 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.183.231:6443/api/v1/nodes\": dial tcp 64.23.183.231:6443: connect: connection refused" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:13.858342 kubelet[2309]: E1106 00:19:13.858210 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:13.859381 containerd[1494]: time="2025-11-06T00:19:13.859317335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-46450dc2d5,Uid:213fd8ba6ea8403664a7acd7de330ecf,Namespace:kube-system,Attempt:0,}" Nov 6 00:19:13.865418 systemd-resolved[1378]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Nov 6 00:19:13.869961 kubelet[2309]: E1106 00:19:13.869673 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:13.870276 containerd[1494]: time="2025-11-06T00:19:13.870227624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-46450dc2d5,Uid:5fea9a4170d304812a0652a89d72f574,Namespace:kube-system,Attempt:0,}" Nov 6 00:19:13.876001 kubelet[2309]: E1106 00:19:13.875688 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:13.876289 containerd[1494]: time="2025-11-06T00:19:13.876235527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-46450dc2d5,Uid:2372855eb8e07e146e4ed057116d43c0,Namespace:kube-system,Attempt:0,}" Nov 6 00:19:14.015550 kubelet[2309]: E1106 00:19:14.015488 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.183.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-46450dc2d5?timeout=10s\": dial tcp 64.23.183.231:6443: connect: connection refused" interval="800ms" Nov 6 00:19:14.184025 kubelet[2309]: E1106 00:19:14.183887 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.183.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-46450dc2d5&limit=500&resourceVersion=0\": dial tcp 64.23.183.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:19:14.197490 kubelet[2309]: I1106 00:19:14.197429 2309 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:14.198045 kubelet[2309]: E1106 00:19:14.198011 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.183.231:6443/api/v1/nodes\": dial tcp 64.23.183.231:6443: connect: connection refused" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:14.318451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996866725.mount: Deactivated successfully. 
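The repeated "Nameserver limits exceeded" warnings come from the kubelet capping the resolv.conf nameserver list it passes on to pods; the cap is three entries, and the applied line logged above (67.207.67.2 67.207.67.3 67.207.67.2, with a duplicate) suggests the host list was longer than that. A minimal sketch of the truncation, assuming that cap of three and a hypothetical host nameserver list:

package main

import "fmt"

// maxNameservers mirrors the kubelet's resolv.conf cap assumed here.
const maxNameservers = 3

// applyLimit keeps only the first maxNameservers entries, which is the
// behaviour the "some nameservers have been omitted" warning describes.
func applyLimit(servers []string) ([]string, bool) {
    if len(servers) <= maxNameservers {
        return servers, false
    }
    return servers[:maxNameservers], true
}

func main() {
    // Hypothetical host resolv.conf; only the first three entries survive.
    host := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "192.0.2.53"}
    applied, truncated := applyLimit(host)
    fmt.Println("applied:", applied, "omitted some:", truncated)
}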
Nov 6 00:19:14.324963 containerd[1494]: time="2025-11-06T00:19:14.324231783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:19:14.327356 containerd[1494]: time="2025-11-06T00:19:14.327300850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 6 00:19:14.327906 containerd[1494]: time="2025-11-06T00:19:14.327863267Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:19:14.329051 containerd[1494]: time="2025-11-06T00:19:14.329017494Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:19:14.329789 containerd[1494]: time="2025-11-06T00:19:14.329761452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 00:19:14.330434 containerd[1494]: time="2025-11-06T00:19:14.330401936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 00:19:14.333367 containerd[1494]: time="2025-11-06T00:19:14.333334837Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:19:14.334947 containerd[1494]: time="2025-11-06T00:19:14.334911388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 463.5059ms" Nov 6 00:19:14.336817 containerd[1494]: time="2025-11-06T00:19:14.336629258Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 458.779527ms" Nov 6 00:19:14.336993 containerd[1494]: time="2025-11-06T00:19:14.336961884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:19:14.346341 containerd[1494]: time="2025-11-06T00:19:14.345919949Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 481.839483ms" Nov 6 00:19:14.456957 containerd[1494]: time="2025-11-06T00:19:14.456819422Z" level=info msg="connecting to shim 24a87e02b6415159182f676d82bf2093f8cac75eff44223719fb2fa910bc0c38" address="unix:///run/containerd/s/3dda72c0ed1b01eca9b706b5b0e126741520f3bd719ce076f0173d6f9729d4fd" namespace=k8s.io protocol=ttrpc version=3 Nov 6 
00:19:14.460534 containerd[1494]: time="2025-11-06T00:19:14.457572060Z" level=info msg="connecting to shim b2980ab09d12dd15664fc979a055c236d1e49ce12f3c42c5f4e462769f78d170" address="unix:///run/containerd/s/96cef658db5e7664c5750b418cecc12bc126e5ff3792be8bbe5b6d7a9316aabf" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:19:14.471491 containerd[1494]: time="2025-11-06T00:19:14.471385183Z" level=info msg="connecting to shim 180488e0dba0ed70a2c3c4872b875b74d583d10f4dd3bef21f371bf0a1be962f" address="unix:///run/containerd/s/45bb34da365d687961376450a750232303a6371cfef78a9779bd3e912a1f6adb" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:19:14.517510 kubelet[2309]: E1106 00:19:14.517422 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.183.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.183.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:19:14.576775 systemd[1]: Started cri-containerd-180488e0dba0ed70a2c3c4872b875b74d583d10f4dd3bef21f371bf0a1be962f.scope - libcontainer container 180488e0dba0ed70a2c3c4872b875b74d583d10f4dd3bef21f371bf0a1be962f. Nov 6 00:19:14.579015 systemd[1]: Started cri-containerd-24a87e02b6415159182f676d82bf2093f8cac75eff44223719fb2fa910bc0c38.scope - libcontainer container 24a87e02b6415159182f676d82bf2093f8cac75eff44223719fb2fa910bc0c38. Nov 6 00:19:14.586870 systemd[1]: Started cri-containerd-b2980ab09d12dd15664fc979a055c236d1e49ce12f3c42c5f4e462769f78d170.scope - libcontainer container b2980ab09d12dd15664fc979a055c236d1e49ce12f3c42c5f4e462769f78d170. Nov 6 00:19:14.601514 kubelet[2309]: E1106 00:19:14.601353 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.183.231:6443/api/v1/namespaces/default/events\": dial tcp 64.23.183.231:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-46450dc2d5.187542deafd565a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-46450dc2d5,UID:ci-4459.1.0-n-46450dc2d5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-46450dc2d5,},FirstTimestamp:2025-11-06 00:19:13.385588131 +0000 UTC m=+0.759484421,LastTimestamp:2025-11-06 00:19:13.385588131 +0000 UTC m=+0.759484421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-46450dc2d5,}" Nov 6 00:19:14.693567 containerd[1494]: time="2025-11-06T00:19:14.693501114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-46450dc2d5,Uid:5fea9a4170d304812a0652a89d72f574,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2980ab09d12dd15664fc979a055c236d1e49ce12f3c42c5f4e462769f78d170\"" Nov 6 00:19:14.699188 kubelet[2309]: E1106 00:19:14.699119 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:14.707647 containerd[1494]: time="2025-11-06T00:19:14.707177368Z" level=info msg="CreateContainer within sandbox \"b2980ab09d12dd15664fc979a055c236d1e49ce12f3c42c5f4e462769f78d170\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:19:14.712527 containerd[1494]: 
time="2025-11-06T00:19:14.712133143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-46450dc2d5,Uid:213fd8ba6ea8403664a7acd7de330ecf,Namespace:kube-system,Attempt:0,} returns sandbox id \"24a87e02b6415159182f676d82bf2093f8cac75eff44223719fb2fa910bc0c38\"" Nov 6 00:19:14.716013 kubelet[2309]: E1106 00:19:14.715062 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:14.722713 containerd[1494]: time="2025-11-06T00:19:14.722654917Z" level=info msg="CreateContainer within sandbox \"24a87e02b6415159182f676d82bf2093f8cac75eff44223719fb2fa910bc0c38\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:19:14.729773 containerd[1494]: time="2025-11-06T00:19:14.729726154Z" level=info msg="Container 7f96be68dbf59d7271f6e8ae9d369f7c068a82d10773db94e881ae762504af7a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:19:14.745450 containerd[1494]: time="2025-11-06T00:19:14.745405672Z" level=info msg="Container 402eab3476eba24e00620e60e715b93efe374bdad8425e63c436be696cb37f04: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:19:14.750937 containerd[1494]: time="2025-11-06T00:19:14.750803190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-46450dc2d5,Uid:2372855eb8e07e146e4ed057116d43c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"180488e0dba0ed70a2c3c4872b875b74d583d10f4dd3bef21f371bf0a1be962f\"" Nov 6 00:19:14.751917 kubelet[2309]: E1106 00:19:14.751888 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:14.757512 containerd[1494]: time="2025-11-06T00:19:14.757288814Z" level=info msg="CreateContainer within sandbox \"b2980ab09d12dd15664fc979a055c236d1e49ce12f3c42c5f4e462769f78d170\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7f96be68dbf59d7271f6e8ae9d369f7c068a82d10773db94e881ae762504af7a\"" Nov 6 00:19:14.757512 containerd[1494]: time="2025-11-06T00:19:14.757431967Z" level=info msg="CreateContainer within sandbox \"24a87e02b6415159182f676d82bf2093f8cac75eff44223719fb2fa910bc0c38\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"402eab3476eba24e00620e60e715b93efe374bdad8425e63c436be696cb37f04\"" Nov 6 00:19:14.757512 containerd[1494]: time="2025-11-06T00:19:14.757487305Z" level=info msg="CreateContainer within sandbox \"180488e0dba0ed70a2c3c4872b875b74d583d10f4dd3bef21f371bf0a1be962f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:19:14.758787 containerd[1494]: time="2025-11-06T00:19:14.758342398Z" level=info msg="StartContainer for \"402eab3476eba24e00620e60e715b93efe374bdad8425e63c436be696cb37f04\"" Nov 6 00:19:14.760784 containerd[1494]: time="2025-11-06T00:19:14.760747348Z" level=info msg="connecting to shim 402eab3476eba24e00620e60e715b93efe374bdad8425e63c436be696cb37f04" address="unix:///run/containerd/s/3dda72c0ed1b01eca9b706b5b0e126741520f3bd719ce076f0173d6f9729d4fd" protocol=ttrpc version=3 Nov 6 00:19:14.761344 containerd[1494]: time="2025-11-06T00:19:14.761313575Z" level=info msg="StartContainer for \"7f96be68dbf59d7271f6e8ae9d369f7c068a82d10773db94e881ae762504af7a\"" Nov 6 00:19:14.763879 containerd[1494]: time="2025-11-06T00:19:14.763797745Z" level=info msg="connecting to shim 
7f96be68dbf59d7271f6e8ae9d369f7c068a82d10773db94e881ae762504af7a" address="unix:///run/containerd/s/96cef658db5e7664c5750b418cecc12bc126e5ff3792be8bbe5b6d7a9316aabf" protocol=ttrpc version=3 Nov 6 00:19:14.766839 containerd[1494]: time="2025-11-06T00:19:14.766773804Z" level=info msg="Container 4dc06eeb75bba103d4911dbb9009b84b83bd71b22ba37462744aa396c0a0c289: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:19:14.778127 containerd[1494]: time="2025-11-06T00:19:14.778062171Z" level=info msg="CreateContainer within sandbox \"180488e0dba0ed70a2c3c4872b875b74d583d10f4dd3bef21f371bf0a1be962f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4dc06eeb75bba103d4911dbb9009b84b83bd71b22ba37462744aa396c0a0c289\"" Nov 6 00:19:14.779825 containerd[1494]: time="2025-11-06T00:19:14.779688850Z" level=info msg="StartContainer for \"4dc06eeb75bba103d4911dbb9009b84b83bd71b22ba37462744aa396c0a0c289\"" Nov 6 00:19:14.781206 containerd[1494]: time="2025-11-06T00:19:14.781159447Z" level=info msg="connecting to shim 4dc06eeb75bba103d4911dbb9009b84b83bd71b22ba37462744aa396c0a0c289" address="unix:///run/containerd/s/45bb34da365d687961376450a750232303a6371cfef78a9779bd3e912a1f6adb" protocol=ttrpc version=3 Nov 6 00:19:14.793772 systemd[1]: Started cri-containerd-7f96be68dbf59d7271f6e8ae9d369f7c068a82d10773db94e881ae762504af7a.scope - libcontainer container 7f96be68dbf59d7271f6e8ae9d369f7c068a82d10773db94e881ae762504af7a. Nov 6 00:19:14.802822 systemd[1]: Started cri-containerd-402eab3476eba24e00620e60e715b93efe374bdad8425e63c436be696cb37f04.scope - libcontainer container 402eab3476eba24e00620e60e715b93efe374bdad8425e63c436be696cb37f04. Nov 6 00:19:14.816301 kubelet[2309]: E1106 00:19:14.816136 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.183.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-46450dc2d5?timeout=10s\": dial tcp 64.23.183.231:6443: connect: connection refused" interval="1.6s" Nov 6 00:19:14.832969 systemd[1]: Started cri-containerd-4dc06eeb75bba103d4911dbb9009b84b83bd71b22ba37462744aa396c0a0c289.scope - libcontainer container 4dc06eeb75bba103d4911dbb9009b84b83bd71b22ba37462744aa396c0a0c289. 
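The lease controller's "Failed to ensure lease exists, will retry" errors show the retry interval doubling while the API server at 64.23.183.231:6443 is still refusing connections: 200ms, then 400ms, then 800ms, then 1.6s. A short sketch of that doubling schedule follows; the 200ms seed is taken from the log, while the number of attempts shown is arbitrary:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Doubling retry interval, seeded with the 200ms value from the log.
    // Printed values: 200ms, 400ms, 800ms, 1.6s, matching the intervals logged above.
    interval := 200 * time.Millisecond
    for attempt := 1; attempt <= 4; attempt++ {
        fmt.Printf("attempt %d: next retry in %v\n", attempt, interval)
        interval *= 2
    }
}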
Nov 6 00:19:14.869066 kubelet[2309]: E1106 00:19:14.868911 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.183.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.183.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:19:14.870788 kubelet[2309]: E1106 00:19:14.870710 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.183.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.183.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:19:14.921640 containerd[1494]: time="2025-11-06T00:19:14.921302173Z" level=info msg="StartContainer for \"7f96be68dbf59d7271f6e8ae9d369f7c068a82d10773db94e881ae762504af7a\" returns successfully" Nov 6 00:19:14.943234 containerd[1494]: time="2025-11-06T00:19:14.943181834Z" level=info msg="StartContainer for \"402eab3476eba24e00620e60e715b93efe374bdad8425e63c436be696cb37f04\" returns successfully" Nov 6 00:19:14.966644 containerd[1494]: time="2025-11-06T00:19:14.965215407Z" level=info msg="StartContainer for \"4dc06eeb75bba103d4911dbb9009b84b83bd71b22ba37462744aa396c0a0c289\" returns successfully" Nov 6 00:19:15.001498 kubelet[2309]: I1106 00:19:15.001064 2309 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:15.002199 kubelet[2309]: E1106 00:19:15.002156 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.183.231:6443/api/v1/nodes\": dial tcp 64.23.183.231:6443: connect: connection refused" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:15.471045 kubelet[2309]: E1106 00:19:15.470462 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-46450dc2d5\" not found" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:15.471409 kubelet[2309]: E1106 00:19:15.471314 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:15.474410 kubelet[2309]: E1106 00:19:15.474371 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-46450dc2d5\" not found" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:15.475919 kubelet[2309]: E1106 00:19:15.475892 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:15.477870 kubelet[2309]: E1106 00:19:15.477840 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-46450dc2d5\" not found" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:15.477986 kubelet[2309]: E1106 00:19:15.477962 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:16.481772 kubelet[2309]: E1106 00:19:16.481149 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-46450dc2d5\" not found" 
node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:16.481772 kubelet[2309]: E1106 00:19:16.481200 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-46450dc2d5\" not found" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:16.481772 kubelet[2309]: E1106 00:19:16.481291 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:16.481772 kubelet[2309]: E1106 00:19:16.481323 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:16.481772 kubelet[2309]: E1106 00:19:16.481570 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-46450dc2d5\" not found" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:16.481772 kubelet[2309]: E1106 00:19:16.481704 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:16.603203 kubelet[2309]: I1106 00:19:16.603168 2309 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:17.472666 kubelet[2309]: I1106 00:19:17.472585 2309 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:17.503444 kubelet[2309]: I1106 00:19:17.502737 2309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:17.564692 kubelet[2309]: E1106 00:19:17.564634 2309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-46450dc2d5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:17.564692 kubelet[2309]: I1106 00:19:17.564670 2309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:17.568053 kubelet[2309]: E1106 00:19:17.568014 2309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:17.568053 kubelet[2309]: I1106 00:19:17.568041 2309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:17.570265 kubelet[2309]: E1106 00:19:17.570219 2309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-46450dc2d5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:18.382503 kubelet[2309]: I1106 00:19:18.382337 2309 apiserver.go:52] "Watching apiserver" Nov 6 00:19:18.403355 kubelet[2309]: I1106 00:19:18.403245 2309 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 00:19:19.071444 kubelet[2309]: I1106 00:19:19.071373 2309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:19.082835 kubelet[2309]: I1106 00:19:19.082775 2309 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:19:19.084502 kubelet[2309]: E1106 00:19:19.084443 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:19.490027 kubelet[2309]: E1106 00:19:19.489606 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:20.098572 systemd[1]: Reload requested from client PID 2592 ('systemctl') (unit session-7.scope)... Nov 6 00:19:20.098589 systemd[1]: Reloading... Nov 6 00:19:20.249669 zram_generator::config[2647]: No configuration found. Nov 6 00:19:20.526989 systemd[1]: Reloading finished in 427 ms. Nov 6 00:19:20.563255 kubelet[2309]: I1106 00:19:20.563081 2309 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:19:20.563585 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:19:20.581946 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:19:20.582267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:19:20.582365 systemd[1]: kubelet.service: Consumed 1.230s CPU time, 121.9M memory peak. Nov 6 00:19:20.584766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:19:20.778844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:19:20.790005 (kubelet)[2686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:19:20.860078 kubelet[2686]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:19:20.860928 kubelet[2686]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:19:20.860928 kubelet[2686]: I1106 00:19:20.860608 2686 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:19:20.868858 kubelet[2686]: I1106 00:19:20.868805 2686 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 00:19:20.868858 kubelet[2686]: I1106 00:19:20.868834 2686 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:19:20.868858 kubelet[2686]: I1106 00:19:20.868869 2686 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 00:19:20.869099 kubelet[2686]: I1106 00:19:20.868907 2686 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 00:19:20.869295 kubelet[2686]: I1106 00:19:20.869262 2686 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:19:20.874417 kubelet[2686]: I1106 00:19:20.874367 2686 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 00:19:20.882094 kubelet[2686]: I1106 00:19:20.881676 2686 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:19:20.887340 kubelet[2686]: I1106 00:19:20.887206 2686 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:19:20.895498 kubelet[2686]: I1106 00:19:20.893980 2686 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 6 00:19:20.895498 kubelet[2686]: I1106 00:19:20.894242 2686 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:19:20.895498 kubelet[2686]: I1106 00:19:20.894293 2686 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-46450dc2d5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:19:20.895498 kubelet[2686]: I1106 00:19:20.894462 2686 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:19:20.895782 kubelet[2686]: I1106 00:19:20.894490 2686 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 00:19:20.895782 kubelet[2686]: I1106 00:19:20.894520 2686 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 00:19:20.895782 kubelet[2686]: I1106 00:19:20.895290 2686 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:19:20.895922 kubelet[2686]: I1106 00:19:20.895898 2686 kubelet.go:475] "Attempting to sync node with API server" Nov 6 00:19:20.896020 kubelet[2686]: I1106 00:19:20.896010 2686 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:19:20.896093 kubelet[2686]: I1106 00:19:20.896086 2686 kubelet.go:387] "Adding apiserver pod source" Nov 
6 00:19:20.896152 kubelet[2686]: I1106 00:19:20.896146 2686 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:19:20.899729 kubelet[2686]: I1106 00:19:20.899699 2686 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:19:20.900241 kubelet[2686]: I1106 00:19:20.900220 2686 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:19:20.900293 kubelet[2686]: I1106 00:19:20.900265 2686 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 00:19:20.910450 kubelet[2686]: I1106 00:19:20.909050 2686 server.go:1262] "Started kubelet" Nov 6 00:19:20.913383 kubelet[2686]: I1106 00:19:20.913338 2686 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:19:20.914250 kubelet[2686]: I1106 00:19:20.913854 2686 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:19:20.924368 kubelet[2686]: I1106 00:19:20.924300 2686 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 00:19:20.925368 kubelet[2686]: I1106 00:19:20.925051 2686 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:19:20.926316 kubelet[2686]: I1106 00:19:20.925986 2686 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:19:20.933402 kubelet[2686]: I1106 00:19:20.933366 2686 server.go:310] "Adding debug handlers to kubelet server" Nov 6 00:19:20.934719 kubelet[2686]: I1106 00:19:20.934665 2686 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:19:20.952880 kubelet[2686]: I1106 00:19:20.952110 2686 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 00:19:20.954821 kubelet[2686]: I1106 00:19:20.954789 2686 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 00:19:20.954977 kubelet[2686]: I1106 00:19:20.954948 2686 reconciler.go:29] "Reconciler: start to sync state" Nov 6 00:19:20.958679 kubelet[2686]: I1106 00:19:20.958545 2686 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:19:20.963535 kubelet[2686]: I1106 00:19:20.961596 2686 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:19:20.963535 kubelet[2686]: I1106 00:19:20.961616 2686 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:19:20.973355 kubelet[2686]: I1106 00:19:20.973311 2686 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 00:19:20.977110 kubelet[2686]: I1106 00:19:20.976565 2686 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 6 00:19:20.977110 kubelet[2686]: I1106 00:19:20.976593 2686 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 00:19:20.977110 kubelet[2686]: I1106 00:19:20.976629 2686 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 00:19:20.977110 kubelet[2686]: E1106 00:19:20.976679 2686 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:19:21.070865 kubelet[2686]: I1106 00:19:21.068762 2686 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:19:21.070865 kubelet[2686]: I1106 00:19:21.068778 2686 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:19:21.070865 kubelet[2686]: I1106 00:19:21.068801 2686 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:19:21.070865 kubelet[2686]: I1106 00:19:21.069299 2686 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:19:21.070865 kubelet[2686]: I1106 00:19:21.069491 2686 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:19:21.070865 kubelet[2686]: I1106 00:19:21.069749 2686 policy_none.go:49] "None policy: Start" Nov 6 00:19:21.070865 kubelet[2686]: I1106 00:19:21.069762 2686 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 00:19:21.070865 kubelet[2686]: I1106 00:19:21.069777 2686 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 00:19:21.070865 kubelet[2686]: I1106 00:19:21.070817 2686 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 6 00:19:21.070865 kubelet[2686]: I1106 00:19:21.070831 2686 policy_none.go:47] "Start" Nov 6 00:19:21.078550 kubelet[2686]: E1106 00:19:21.078384 2686 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:19:21.099140 kubelet[2686]: E1106 00:19:21.097880 2686 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:19:21.099140 kubelet[2686]: I1106 00:19:21.098173 2686 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:19:21.099140 kubelet[2686]: I1106 00:19:21.098203 2686 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:19:21.106958 kubelet[2686]: I1106 00:19:21.106814 2686 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:19:21.119505 kubelet[2686]: E1106 00:19:21.119358 2686 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:19:21.230843 kubelet[2686]: I1106 00:19:21.230632 2686 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.247718 kubelet[2686]: I1106 00:19:21.245571 2686 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.247718 kubelet[2686]: I1106 00:19:21.247359 2686 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.279646 kubelet[2686]: I1106 00:19:21.279586 2686 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.280573 kubelet[2686]: I1106 00:19:21.280303 2686 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.280573 kubelet[2686]: I1106 00:19:21.280433 2686 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.288906 kubelet[2686]: I1106 00:19:21.288868 2686 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:19:21.293285 kubelet[2686]: I1106 00:19:21.292950 2686 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:19:21.293847 kubelet[2686]: I1106 00:19:21.293815 2686 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:19:21.294138 kubelet[2686]: E1106 00:19:21.294062 2686 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-46450dc2d5\" already exists" pod="kube-system/kube-scheduler-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.357591 kubelet[2686]: I1106 00:19:21.357393 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/213fd8ba6ea8403664a7acd7de330ecf-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-46450dc2d5\" (UID: \"213fd8ba6ea8403664a7acd7de330ecf\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.358514 kubelet[2686]: I1106 00:19:21.357458 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/213fd8ba6ea8403664a7acd7de330ecf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-46450dc2d5\" (UID: \"213fd8ba6ea8403664a7acd7de330ecf\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.358514 kubelet[2686]: I1106 00:19:21.358351 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5fea9a4170d304812a0652a89d72f574-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" (UID: \"5fea9a4170d304812a0652a89d72f574\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.358514 kubelet[2686]: I1106 00:19:21.358411 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fea9a4170d304812a0652a89d72f574-k8s-certs\") pod 
\"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" (UID: \"5fea9a4170d304812a0652a89d72f574\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.358514 kubelet[2686]: I1106 00:19:21.358438 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5fea9a4170d304812a0652a89d72f574-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" (UID: \"5fea9a4170d304812a0652a89d72f574\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.358796 kubelet[2686]: I1106 00:19:21.358556 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2372855eb8e07e146e4ed057116d43c0-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-46450dc2d5\" (UID: \"2372855eb8e07e146e4ed057116d43c0\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.359180 kubelet[2686]: I1106 00:19:21.358608 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/213fd8ba6ea8403664a7acd7de330ecf-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-46450dc2d5\" (UID: \"213fd8ba6ea8403664a7acd7de330ecf\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.359180 kubelet[2686]: I1106 00:19:21.359049 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fea9a4170d304812a0652a89d72f574-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" (UID: \"5fea9a4170d304812a0652a89d72f574\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.359180 kubelet[2686]: I1106 00:19:21.359111 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fea9a4170d304812a0652a89d72f574-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" (UID: \"5fea9a4170d304812a0652a89d72f574\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:21.589929 kubelet[2686]: E1106 00:19:21.589882 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:21.594207 kubelet[2686]: E1106 00:19:21.593488 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:21.594550 kubelet[2686]: E1106 00:19:21.594376 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:21.897576 kubelet[2686]: I1106 00:19:21.897526 2686 apiserver.go:52] "Watching apiserver" Nov 6 00:19:21.956579 kubelet[2686]: I1106 00:19:21.955784 2686 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 00:19:22.023324 kubelet[2686]: I1106 00:19:22.023046 2686 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:22.024270 kubelet[2686]: E1106 00:19:22.024242 
2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:22.027586 kubelet[2686]: E1106 00:19:22.026744 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:22.032180 kubelet[2686]: I1106 00:19:22.031740 2686 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:19:22.032180 kubelet[2686]: E1106 00:19:22.031799 2686 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-n-46450dc2d5\" already exists" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" Nov 6 00:19:22.032180 kubelet[2686]: E1106 00:19:22.031984 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:22.070041 kubelet[2686]: I1106 00:19:22.069677 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-46450dc2d5" podStartSLOduration=1.069652157 podStartE2EDuration="1.069652157s" podCreationTimestamp="2025-11-06 00:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:19:22.068514141 +0000 UTC m=+1.272373673" watchObservedRunningTime="2025-11-06 00:19:22.069652157 +0000 UTC m=+1.273511683" Nov 6 00:19:22.104608 kubelet[2686]: I1106 00:19:22.104443 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-n-46450dc2d5" podStartSLOduration=1.104420227 podStartE2EDuration="1.104420227s" podCreationTimestamp="2025-11-06 00:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:19:22.08483129 +0000 UTC m=+1.288690837" watchObservedRunningTime="2025-11-06 00:19:22.104420227 +0000 UTC m=+1.308279752" Nov 6 00:19:22.119179 kubelet[2686]: I1106 00:19:22.118918 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-n-46450dc2d5" podStartSLOduration=3.118897722 podStartE2EDuration="3.118897722s" podCreationTimestamp="2025-11-06 00:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:19:22.105242466 +0000 UTC m=+1.309101999" watchObservedRunningTime="2025-11-06 00:19:22.118897722 +0000 UTC m=+1.322757256" Nov 6 00:19:23.026334 kubelet[2686]: E1106 00:19:23.025959 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:23.027927 kubelet[2686]: E1106 00:19:23.027175 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:23.029242 kubelet[2686]: E1106 00:19:23.028836 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:24.028864 kubelet[2686]: E1106 00:19:24.027608 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:26.008601 kubelet[2686]: I1106 00:19:26.008559 2686 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:19:26.009910 kubelet[2686]: I1106 00:19:26.009279 2686 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:19:26.010037 containerd[1494]: time="2025-11-06T00:19:26.009007771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 00:19:26.408783 kubelet[2686]: E1106 00:19:26.407609 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:26.964678 systemd[1]: Created slice kubepods-besteffort-pod17e5ec4f_e96c_48c0_8ebc_105d7e7314cd.slice - libcontainer container kubepods-besteffort-pod17e5ec4f_e96c_48c0_8ebc_105d7e7314cd.slice. Nov 6 00:19:26.997957 kubelet[2686]: I1106 00:19:26.997877 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17e5ec4f-e96c-48c0-8ebc-105d7e7314cd-xtables-lock\") pod \"kube-proxy-zzj6p\" (UID: \"17e5ec4f-e96c-48c0-8ebc-105d7e7314cd\") " pod="kube-system/kube-proxy-zzj6p" Nov 6 00:19:26.998387 kubelet[2686]: I1106 00:19:26.998234 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17e5ec4f-e96c-48c0-8ebc-105d7e7314cd-lib-modules\") pod \"kube-proxy-zzj6p\" (UID: \"17e5ec4f-e96c-48c0-8ebc-105d7e7314cd\") " pod="kube-system/kube-proxy-zzj6p" Nov 6 00:19:26.998387 kubelet[2686]: I1106 00:19:26.998306 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc9zl\" (UniqueName: \"kubernetes.io/projected/17e5ec4f-e96c-48c0-8ebc-105d7e7314cd-kube-api-access-qc9zl\") pod \"kube-proxy-zzj6p\" (UID: \"17e5ec4f-e96c-48c0-8ebc-105d7e7314cd\") " pod="kube-system/kube-proxy-zzj6p" Nov 6 00:19:26.998555 kubelet[2686]: I1106 00:19:26.998415 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/17e5ec4f-e96c-48c0-8ebc-105d7e7314cd-kube-proxy\") pod \"kube-proxy-zzj6p\" (UID: \"17e5ec4f-e96c-48c0-8ebc-105d7e7314cd\") " pod="kube-system/kube-proxy-zzj6p" Nov 6 00:19:27.035563 kubelet[2686]: E1106 00:19:27.035134 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:27.070703 kubelet[2686]: E1106 00:19:27.070661 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:27.223708 systemd[1]: Created slice kubepods-besteffort-pod4cad66bf_1233_49fe_8c83_ac17eefe5dab.slice - libcontainer container kubepods-besteffort-pod4cad66bf_1233_49fe_8c83_ac17eefe5dab.slice. 
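The recurring dns.go:154 "Nameserver limits exceeded" entries above are the kubelet's resolv.conf handling at work: glibc-based resolvers honour at most three nameserver entries, so when the node's /etc/resolv.conf yields more than that the kubelet drops the extras and logs the line it actually applied; here the applied line even lists 67.207.67.2 twice. A minimal sketch of that kind of check, in Go, assuming a plain resolv.conf parser rather than the kubelet's actual implementation:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the glibc MAXNS limit of three resolvers.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// The kubelet logs a similar warning and applies only the first three entries.
		fmt.Printf("nameserver limit exceeded: keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}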
Nov 6 00:19:27.278127 kubelet[2686]: E1106 00:19:27.278070 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:27.279429 containerd[1494]: time="2025-11-06T00:19:27.279362323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zzj6p,Uid:17e5ec4f-e96c-48c0-8ebc-105d7e7314cd,Namespace:kube-system,Attempt:0,}" Nov 6 00:19:27.300553 kubelet[2686]: I1106 00:19:27.300036 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4cad66bf-1233-49fe-8c83-ac17eefe5dab-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-hqbwm\" (UID: \"4cad66bf-1233-49fe-8c83-ac17eefe5dab\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hqbwm" Nov 6 00:19:27.300919 kubelet[2686]: I1106 00:19:27.300893 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8prh5\" (UniqueName: \"kubernetes.io/projected/4cad66bf-1233-49fe-8c83-ac17eefe5dab-kube-api-access-8prh5\") pod \"tigera-operator-65cdcdfd6d-hqbwm\" (UID: \"4cad66bf-1233-49fe-8c83-ac17eefe5dab\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hqbwm" Nov 6 00:19:27.301656 containerd[1494]: time="2025-11-06T00:19:27.301618627Z" level=info msg="connecting to shim 05e5d0fb9e1e3751b831fb71d27e6dc855bf63759988696ec5831cd328e1e019" address="unix:///run/containerd/s/be867ac5678cdcf8d9168e2bd6498efa5ff846143ca1e67c4110c3e173dde515" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:19:27.336759 systemd[1]: Started cri-containerd-05e5d0fb9e1e3751b831fb71d27e6dc855bf63759988696ec5831cd328e1e019.scope - libcontainer container 05e5d0fb9e1e3751b831fb71d27e6dc855bf63759988696ec5831cd328e1e019. Nov 6 00:19:27.374438 containerd[1494]: time="2025-11-06T00:19:27.374387974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zzj6p,Uid:17e5ec4f-e96c-48c0-8ebc-105d7e7314cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"05e5d0fb9e1e3751b831fb71d27e6dc855bf63759988696ec5831cd328e1e019\"" Nov 6 00:19:27.375796 kubelet[2686]: E1106 00:19:27.375762 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:27.382908 containerd[1494]: time="2025-11-06T00:19:27.382857689Z" level=info msg="CreateContainer within sandbox \"05e5d0fb9e1e3751b831fb71d27e6dc855bf63759988696ec5831cd328e1e019\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:19:27.394315 containerd[1494]: time="2025-11-06T00:19:27.394268210Z" level=info msg="Container ec5c3ebdc41c782f232ad0ef762655297aa08c5e8d433caff09aff0592997a59: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:19:27.396981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1962694982.mount: Deactivated successfully. 
Nov 6 00:19:27.413920 containerd[1494]: time="2025-11-06T00:19:27.413874719Z" level=info msg="CreateContainer within sandbox \"05e5d0fb9e1e3751b831fb71d27e6dc855bf63759988696ec5831cd328e1e019\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec5c3ebdc41c782f232ad0ef762655297aa08c5e8d433caff09aff0592997a59\"" Nov 6 00:19:27.415812 containerd[1494]: time="2025-11-06T00:19:27.415775256Z" level=info msg="StartContainer for \"ec5c3ebdc41c782f232ad0ef762655297aa08c5e8d433caff09aff0592997a59\"" Nov 6 00:19:27.422344 containerd[1494]: time="2025-11-06T00:19:27.422288050Z" level=info msg="connecting to shim ec5c3ebdc41c782f232ad0ef762655297aa08c5e8d433caff09aff0592997a59" address="unix:///run/containerd/s/be867ac5678cdcf8d9168e2bd6498efa5ff846143ca1e67c4110c3e173dde515" protocol=ttrpc version=3 Nov 6 00:19:27.449877 systemd[1]: Started cri-containerd-ec5c3ebdc41c782f232ad0ef762655297aa08c5e8d433caff09aff0592997a59.scope - libcontainer container ec5c3ebdc41c782f232ad0ef762655297aa08c5e8d433caff09aff0592997a59. Nov 6 00:19:27.501042 containerd[1494]: time="2025-11-06T00:19:27.500119690Z" level=info msg="StartContainer for \"ec5c3ebdc41c782f232ad0ef762655297aa08c5e8d433caff09aff0592997a59\" returns successfully" Nov 6 00:19:27.533166 containerd[1494]: time="2025-11-06T00:19:27.533069133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hqbwm,Uid:4cad66bf-1233-49fe-8c83-ac17eefe5dab,Namespace:tigera-operator,Attempt:0,}" Nov 6 00:19:27.566727 containerd[1494]: time="2025-11-06T00:19:27.566557502Z" level=info msg="connecting to shim 8d57736f58b1a6df36f1c0d9102a64e51b242c60752663bef2205a64b1e8e61b" address="unix:///run/containerd/s/949808a14f0a0e91fd54c063a259b9bda3d28ce6222c7509b95447b841f4a683" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:19:27.621721 systemd[1]: Started cri-containerd-8d57736f58b1a6df36f1c0d9102a64e51b242c60752663bef2205a64b1e8e61b.scope - libcontainer container 8d57736f58b1a6df36f1c0d9102a64e51b242c60752663bef2205a64b1e8e61b. 
Nov 6 00:19:27.698640 containerd[1494]: time="2025-11-06T00:19:27.698585135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hqbwm,Uid:4cad66bf-1233-49fe-8c83-ac17eefe5dab,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8d57736f58b1a6df36f1c0d9102a64e51b242c60752663bef2205a64b1e8e61b\"" Nov 6 00:19:27.701607 containerd[1494]: time="2025-11-06T00:19:27.701562534Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 6 00:19:28.049652 kubelet[2686]: E1106 00:19:28.049375 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:28.050120 kubelet[2686]: E1106 00:19:28.049653 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:28.050120 kubelet[2686]: E1106 00:19:28.050009 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:28.069612 kubelet[2686]: I1106 00:19:28.069535 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zzj6p" podStartSLOduration=2.06950868 podStartE2EDuration="2.06950868s" podCreationTimestamp="2025-11-06 00:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:19:28.067569838 +0000 UTC m=+7.271429373" watchObservedRunningTime="2025-11-06 00:19:28.06950868 +0000 UTC m=+7.273368214" Nov 6 00:19:29.062051 kubelet[2686]: E1106 00:19:29.061609 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:29.124559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218002981.mount: Deactivated successfully. 
Nov 6 00:19:29.962139 containerd[1494]: time="2025-11-06T00:19:29.962068423Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:29.963153 containerd[1494]: time="2025-11-06T00:19:29.963111845Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 6 00:19:29.963590 containerd[1494]: time="2025-11-06T00:19:29.963568085Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:29.965976 containerd[1494]: time="2025-11-06T00:19:29.965919243Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:29.966750 containerd[1494]: time="2025-11-06T00:19:29.966715196Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.265117005s" Nov 6 00:19:29.966871 containerd[1494]: time="2025-11-06T00:19:29.966855513Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 6 00:19:29.972077 containerd[1494]: time="2025-11-06T00:19:29.972025793Z" level=info msg="CreateContainer within sandbox \"8d57736f58b1a6df36f1c0d9102a64e51b242c60752663bef2205a64b1e8e61b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 6 00:19:29.980328 containerd[1494]: time="2025-11-06T00:19:29.979733480Z" level=info msg="Container 870f60b362a2c9de6f217c13f9b0db0497e35f223ec0cc134988e731dd61bcd3: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:19:29.994042 containerd[1494]: time="2025-11-06T00:19:29.993975973Z" level=info msg="CreateContainer within sandbox \"8d57736f58b1a6df36f1c0d9102a64e51b242c60752663bef2205a64b1e8e61b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"870f60b362a2c9de6f217c13f9b0db0497e35f223ec0cc134988e731dd61bcd3\"" Nov 6 00:19:29.995165 containerd[1494]: time="2025-11-06T00:19:29.995133585Z" level=info msg="StartContainer for \"870f60b362a2c9de6f217c13f9b0db0497e35f223ec0cc134988e731dd61bcd3\"" Nov 6 00:19:29.996875 containerd[1494]: time="2025-11-06T00:19:29.996821389Z" level=info msg="connecting to shim 870f60b362a2c9de6f217c13f9b0db0497e35f223ec0cc134988e731dd61bcd3" address="unix:///run/containerd/s/949808a14f0a0e91fd54c063a259b9bda3d28ce6222c7509b95447b841f4a683" protocol=ttrpc version=3 Nov 6 00:19:30.029840 systemd[1]: Started cri-containerd-870f60b362a2c9de6f217c13f9b0db0497e35f223ec0cc134988e731dd61bcd3.scope - libcontainer container 870f60b362a2c9de6f217c13f9b0db0497e35f223ec0cc134988e731dd61bcd3. 
Nov 6 00:19:30.087712 containerd[1494]: time="2025-11-06T00:19:30.087641879Z" level=info msg="StartContainer for \"870f60b362a2c9de6f217c13f9b0db0497e35f223ec0cc134988e731dd61bcd3\" returns successfully" Nov 6 00:19:31.089622 kubelet[2686]: I1106 00:19:31.089402 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-hqbwm" podStartSLOduration=1.8219361649999999 podStartE2EDuration="4.089382255s" podCreationTimestamp="2025-11-06 00:19:27 +0000 UTC" firstStartedPulling="2025-11-06 00:19:27.700603504 +0000 UTC m=+6.904463032" lastFinishedPulling="2025-11-06 00:19:29.968049607 +0000 UTC m=+9.171909122" observedRunningTime="2025-11-06 00:19:31.089065319 +0000 UTC m=+10.292924854" watchObservedRunningTime="2025-11-06 00:19:31.089382255 +0000 UTC m=+10.293241790" Nov 6 00:19:32.598114 kubelet[2686]: E1106 00:19:32.597748 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:34.861310 update_engine[1475]: I20251106 00:19:34.860534 1475 update_attempter.cc:509] Updating boot flags... Nov 6 00:19:36.741382 sudo[1761]: pam_unix(sudo:session): session closed for user root Nov 6 00:19:36.744450 sshd[1760]: Connection closed by 139.178.68.195 port 42472 Nov 6 00:19:36.745431 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Nov 6 00:19:36.751837 systemd[1]: sshd@6-64.23.183.231:22-139.178.68.195:42472.service: Deactivated successfully. Nov 6 00:19:36.756393 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:19:36.758735 systemd[1]: session-7.scope: Consumed 6.432s CPU time, 167.2M memory peak. Nov 6 00:19:36.766950 systemd-logind[1474]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:19:36.769397 systemd-logind[1474]: Removed session 7. Nov 6 00:19:43.842989 systemd[1]: Created slice kubepods-besteffort-pod6e65f593_bb11_4d8d_a6e0_32f839b87632.slice - libcontainer container kubepods-besteffort-pod6e65f593_bb11_4d8d_a6e0_32f839b87632.slice. 
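The pod_startup_latency_tracker entry for tigera-operator above is the first one in this boot with a real image-pull window, and its numbers are internally consistent: podStartE2EDuration (4.089382255s) equals watchObservedRunningTime (00:19:31.089382255) minus podCreationTimestamp (00:19:27), and podStartSLOduration (1.821936165s) is, to within rounding, that E2E figure minus the pull window lastFinishedPulling - firstStartedPulling (00:19:29.968049607 - 00:19:27.700603504, about 2.267446s), i.e. startup time excluding image pulling. A small Go check of that arithmetic, using the timestamps exactly as logged (the "excluding pull time" reading is inferred from these numbers, not taken from kubelet source):

package main

import (
	"fmt"
	"time"
)

// Timestamps copied verbatim from the pod_startup_latency_tracker entry above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-06 00:19:27 +0000 UTC")
	running := mustParse("2025-11-06 00:19:31.089382255 +0000 UTC")
	pullStart := mustParse("2025-11-06 00:19:27.700603504 +0000 UTC")
	pullEnd := mustParse("2025-11-06 00:19:29.968049607 +0000 UTC")

	e2e := running.Sub(created)      // 4.089382255s, the logged podStartE2EDuration
	pull := pullEnd.Sub(pullStart)   // ~2.267446s spent pulling quay.io/tigera/operator:v1.38.7
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull is ~1.8219s, matching the logged podStartSLOduration
}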
Nov 6 00:19:43.927214 kubelet[2686]: I1106 00:19:43.927146 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww2xm\" (UniqueName: \"kubernetes.io/projected/6e65f593-bb11-4d8d-a6e0-32f839b87632-kube-api-access-ww2xm\") pod \"calico-typha-cb4687868-xtlmk\" (UID: \"6e65f593-bb11-4d8d-a6e0-32f839b87632\") " pod="calico-system/calico-typha-cb4687868-xtlmk" Nov 6 00:19:43.927214 kubelet[2686]: I1106 00:19:43.927220 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e65f593-bb11-4d8d-a6e0-32f839b87632-tigera-ca-bundle\") pod \"calico-typha-cb4687868-xtlmk\" (UID: \"6e65f593-bb11-4d8d-a6e0-32f839b87632\") " pod="calico-system/calico-typha-cb4687868-xtlmk" Nov 6 00:19:43.927947 kubelet[2686]: I1106 00:19:43.927244 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6e65f593-bb11-4d8d-a6e0-32f839b87632-typha-certs\") pod \"calico-typha-cb4687868-xtlmk\" (UID: \"6e65f593-bb11-4d8d-a6e0-32f839b87632\") " pod="calico-system/calico-typha-cb4687868-xtlmk" Nov 6 00:19:44.014273 systemd[1]: Created slice kubepods-besteffort-pod4712d542_6068_4702_8164_7eb051e8e51f.slice - libcontainer container kubepods-besteffort-pod4712d542_6068_4702_8164_7eb051e8e51f.slice. Nov 6 00:19:44.129174 kubelet[2686]: I1106 00:19:44.129023 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4712d542-6068-4702-8164-7eb051e8e51f-var-run-calico\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.129174 kubelet[2686]: I1106 00:19:44.129074 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2bx2\" (UniqueName: \"kubernetes.io/projected/4712d542-6068-4702-8164-7eb051e8e51f-kube-api-access-p2bx2\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.129174 kubelet[2686]: I1106 00:19:44.129099 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4712d542-6068-4702-8164-7eb051e8e51f-cni-bin-dir\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.129174 kubelet[2686]: I1106 00:19:44.129119 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4712d542-6068-4702-8164-7eb051e8e51f-flexvol-driver-host\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.129174 kubelet[2686]: I1106 00:19:44.129135 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4712d542-6068-4702-8164-7eb051e8e51f-lib-modules\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.129424 kubelet[2686]: I1106 00:19:44.129152 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4712d542-6068-4702-8164-7eb051e8e51f-cni-net-dir\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.129424 kubelet[2686]: I1106 00:19:44.129171 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4712d542-6068-4702-8164-7eb051e8e51f-node-certs\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.129424 kubelet[2686]: I1106 00:19:44.129185 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4712d542-6068-4702-8164-7eb051e8e51f-xtables-lock\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.129424 kubelet[2686]: I1106 00:19:44.129201 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4712d542-6068-4702-8164-7eb051e8e51f-policysync\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.129424 kubelet[2686]: I1106 00:19:44.129216 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4712d542-6068-4702-8164-7eb051e8e51f-var-lib-calico\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.129563 kubelet[2686]: I1106 00:19:44.129233 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4712d542-6068-4702-8164-7eb051e8e51f-tigera-ca-bundle\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.129563 kubelet[2686]: I1106 00:19:44.129249 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4712d542-6068-4702-8164-7eb051e8e51f-cni-log-dir\") pod \"calico-node-ndvs9\" (UID: \"4712d542-6068-4702-8164-7eb051e8e51f\") " pod="calico-system/calico-node-ndvs9" Nov 6 00:19:44.148613 kubelet[2686]: E1106 00:19:44.148573 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:44.149650 containerd[1494]: time="2025-11-06T00:19:44.149599469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cb4687868-xtlmk,Uid:6e65f593-bb11-4d8d-a6e0-32f839b87632,Namespace:calico-system,Attempt:0,}" Nov 6 00:19:44.185247 containerd[1494]: time="2025-11-06T00:19:44.185171611Z" level=info msg="connecting to shim af9f15ae62497fcda28ab40ae79fe29095e7d7b5084334536b49133026f44eb4" address="unix:///run/containerd/s/b7ce599aea78fb2d0cf41bed28b180748c1159e739864ddc7035b7c4bcc4a20a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:19:44.230360 systemd[1]: Started cri-containerd-af9f15ae62497fcda28ab40ae79fe29095e7d7b5084334536b49133026f44eb4.scope - libcontainer container af9f15ae62497fcda28ab40ae79fe29095e7d7b5084334536b49133026f44eb4. 
Nov 6 00:19:44.237135 kubelet[2686]: E1106 00:19:44.237101 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.237135 kubelet[2686]: W1106 00:19:44.237125 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.237779 kubelet[2686]: E1106 00:19:44.237697 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.244944 kubelet[2686]: E1106 00:19:44.244882 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.244944 kubelet[2686]: W1106 00:19:44.244907 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.245258 kubelet[2686]: E1106 00:19:44.245206 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.256243 kubelet[2686]: E1106 00:19:44.256087 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.256243 kubelet[2686]: W1106 00:19:44.256117 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.256243 kubelet[2686]: E1106 00:19:44.256189 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.290676 kubelet[2686]: E1106 00:19:44.290360 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:19:44.305815 kubelet[2686]: E1106 00:19:44.305581 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.305815 kubelet[2686]: W1106 00:19:44.305611 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.306704 kubelet[2686]: E1106 00:19:44.306506 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.307446 kubelet[2686]: E1106 00:19:44.307013 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.307676 kubelet[2686]: W1106 00:19:44.307518 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.307676 kubelet[2686]: E1106 00:19:44.307547 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.309392 kubelet[2686]: E1106 00:19:44.309362 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.309672 kubelet[2686]: W1106 00:19:44.309437 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.310599 kubelet[2686]: E1106 00:19:44.309745 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.312516 kubelet[2686]: E1106 00:19:44.311211 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.312516 kubelet[2686]: W1106 00:19:44.311233 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.312516 kubelet[2686]: E1106 00:19:44.311263 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.313343 kubelet[2686]: E1106 00:19:44.313164 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.313343 kubelet[2686]: W1106 00:19:44.313191 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.313343 kubelet[2686]: E1106 00:19:44.313215 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.313938 kubelet[2686]: E1106 00:19:44.313848 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.313938 kubelet[2686]: W1106 00:19:44.313863 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.313938 kubelet[2686]: E1106 00:19:44.313880 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.314996 kubelet[2686]: E1106 00:19:44.314976 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.315697 kubelet[2686]: W1106 00:19:44.315219 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.315697 kubelet[2686]: E1106 00:19:44.315246 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.317059 kubelet[2686]: E1106 00:19:44.317026 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.317539 kubelet[2686]: W1106 00:19:44.317045 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.317539 kubelet[2686]: E1106 00:19:44.317173 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.318306 kubelet[2686]: E1106 00:19:44.318048 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.318306 kubelet[2686]: W1106 00:19:44.318123 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.318306 kubelet[2686]: E1106 00:19:44.318147 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.319955 kubelet[2686]: E1106 00:19:44.319855 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.319955 kubelet[2686]: W1106 00:19:44.319874 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.319955 kubelet[2686]: E1106 00:19:44.319898 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.320643 kubelet[2686]: E1106 00:19:44.320541 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.320643 kubelet[2686]: W1106 00:19:44.320560 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.320643 kubelet[2686]: E1106 00:19:44.320581 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.322365 kubelet[2686]: E1106 00:19:44.321060 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.322365 kubelet[2686]: W1106 00:19:44.321078 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.322365 kubelet[2686]: E1106 00:19:44.321097 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.322927 kubelet[2686]: E1106 00:19:44.322883 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.322927 kubelet[2686]: W1106 00:19:44.322907 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.323032 kubelet[2686]: E1106 00:19:44.322937 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.323216 kubelet[2686]: E1106 00:19:44.323200 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.323265 kubelet[2686]: W1106 00:19:44.323216 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.323265 kubelet[2686]: E1106 00:19:44.323232 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.323853 kubelet[2686]: E1106 00:19:44.323427 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.323853 kubelet[2686]: W1106 00:19:44.323441 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.323853 kubelet[2686]: E1106 00:19:44.323454 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.323853 kubelet[2686]: E1106 00:19:44.323848 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.324004 kubelet[2686]: W1106 00:19:44.323862 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.324004 kubelet[2686]: E1106 00:19:44.323880 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.325221 kubelet[2686]: E1106 00:19:44.325183 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.325221 kubelet[2686]: W1106 00:19:44.325207 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.325221 kubelet[2686]: E1106 00:19:44.325226 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.326603 kubelet[2686]: E1106 00:19:44.326570 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.326603 kubelet[2686]: W1106 00:19:44.326601 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.326720 kubelet[2686]: E1106 00:19:44.326622 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.326866 kubelet[2686]: E1106 00:19:44.326821 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:44.328109 containerd[1494]: time="2025-11-06T00:19:44.327974074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ndvs9,Uid:4712d542-6068-4702-8164-7eb051e8e51f,Namespace:calico-system,Attempt:0,}" Nov 6 00:19:44.328525 kubelet[2686]: E1106 00:19:44.328498 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.328525 kubelet[2686]: W1106 00:19:44.328522 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.328752 kubelet[2686]: E1106 00:19:44.328544 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.329720 kubelet[2686]: E1106 00:19:44.329696 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.329720 kubelet[2686]: W1106 00:19:44.329718 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.329828 kubelet[2686]: E1106 00:19:44.329740 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.331109 kubelet[2686]: E1106 00:19:44.330971 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.331109 kubelet[2686]: W1106 00:19:44.330993 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.331109 kubelet[2686]: E1106 00:19:44.331014 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.331109 kubelet[2686]: I1106 00:19:44.331056 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14475adc-4ac3-4f9b-9293-bb510ff52d31-kubelet-dir\") pod \"csi-node-driver-fsxfk\" (UID: \"14475adc-4ac3-4f9b-9293-bb510ff52d31\") " pod="calico-system/csi-node-driver-fsxfk" Nov 6 00:19:44.331931 kubelet[2686]: E1106 00:19:44.331650 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.331931 kubelet[2686]: W1106 00:19:44.331669 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.331931 kubelet[2686]: E1106 00:19:44.331689 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.331931 kubelet[2686]: I1106 00:19:44.331833 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/14475adc-4ac3-4f9b-9293-bb510ff52d31-registration-dir\") pod \"csi-node-driver-fsxfk\" (UID: \"14475adc-4ac3-4f9b-9293-bb510ff52d31\") " pod="calico-system/csi-node-driver-fsxfk" Nov 6 00:19:44.333268 kubelet[2686]: E1106 00:19:44.332431 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.333268 kubelet[2686]: W1106 00:19:44.332446 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.333268 kubelet[2686]: E1106 00:19:44.332481 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.333268 kubelet[2686]: E1106 00:19:44.333002 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.333268 kubelet[2686]: W1106 00:19:44.333017 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.333268 kubelet[2686]: E1106 00:19:44.333035 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.334178 kubelet[2686]: E1106 00:19:44.333570 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.334178 kubelet[2686]: W1106 00:19:44.333585 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.334178 kubelet[2686]: E1106 00:19:44.333602 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.334178 kubelet[2686]: I1106 00:19:44.333639 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/14475adc-4ac3-4f9b-9293-bb510ff52d31-socket-dir\") pod \"csi-node-driver-fsxfk\" (UID: \"14475adc-4ac3-4f9b-9293-bb510ff52d31\") " pod="calico-system/csi-node-driver-fsxfk" Nov 6 00:19:44.334312 kubelet[2686]: E1106 00:19:44.334217 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.334312 kubelet[2686]: W1106 00:19:44.334239 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.334312 kubelet[2686]: E1106 00:19:44.334257 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.335492 kubelet[2686]: I1106 00:19:44.334455 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/14475adc-4ac3-4f9b-9293-bb510ff52d31-varrun\") pod \"csi-node-driver-fsxfk\" (UID: \"14475adc-4ac3-4f9b-9293-bb510ff52d31\") " pod="calico-system/csi-node-driver-fsxfk" Nov 6 00:19:44.335492 kubelet[2686]: E1106 00:19:44.335031 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.335492 kubelet[2686]: W1106 00:19:44.335046 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.335492 kubelet[2686]: E1106 00:19:44.335064 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.335492 kubelet[2686]: E1106 00:19:44.335353 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.335492 kubelet[2686]: W1106 00:19:44.335367 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.335492 kubelet[2686]: E1106 00:19:44.335382 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.336524 kubelet[2686]: E1106 00:19:44.335642 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.336524 kubelet[2686]: W1106 00:19:44.335653 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.336524 kubelet[2686]: E1106 00:19:44.335667 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.336524 kubelet[2686]: I1106 00:19:44.335708 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff5dw\" (UniqueName: \"kubernetes.io/projected/14475adc-4ac3-4f9b-9293-bb510ff52d31-kube-api-access-ff5dw\") pod \"csi-node-driver-fsxfk\" (UID: \"14475adc-4ac3-4f9b-9293-bb510ff52d31\") " pod="calico-system/csi-node-driver-fsxfk" Nov 6 00:19:44.336524 kubelet[2686]: E1106 00:19:44.335978 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.336524 kubelet[2686]: W1106 00:19:44.335993 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.336524 kubelet[2686]: E1106 00:19:44.336008 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.336524 kubelet[2686]: E1106 00:19:44.336281 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.336524 kubelet[2686]: W1106 00:19:44.336294 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.337806 kubelet[2686]: E1106 00:19:44.336309 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.337806 kubelet[2686]: E1106 00:19:44.336638 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.337806 kubelet[2686]: W1106 00:19:44.336651 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.337806 kubelet[2686]: E1106 00:19:44.336666 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.337806 kubelet[2686]: E1106 00:19:44.336894 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.337806 kubelet[2686]: W1106 00:19:44.336905 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.337806 kubelet[2686]: E1106 00:19:44.336920 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.337806 kubelet[2686]: E1106 00:19:44.337168 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.337806 kubelet[2686]: W1106 00:19:44.337180 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.337806 kubelet[2686]: E1106 00:19:44.337194 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.338134 kubelet[2686]: E1106 00:19:44.337534 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.338134 kubelet[2686]: W1106 00:19:44.337550 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.338134 kubelet[2686]: E1106 00:19:44.337564 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.374046 containerd[1494]: time="2025-11-06T00:19:44.373580801Z" level=info msg="connecting to shim 550192ae9d9b31ca54e5025a7f3d9f97b7122e41a3dc4a46c054cb0385644eca" address="unix:///run/containerd/s/1f5003c61f8782dec9c9b31b2725abe25022388c1776f15448f430e720179b7f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:19:44.415250 systemd[1]: Started cri-containerd-550192ae9d9b31ca54e5025a7f3d9f97b7122e41a3dc4a46c054cb0385644eca.scope - libcontainer container 550192ae9d9b31ca54e5025a7f3d9f97b7122e41a3dc4a46c054cb0385644eca. Nov 6 00:19:44.437849 kubelet[2686]: E1106 00:19:44.437815 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.438147 kubelet[2686]: W1106 00:19:44.438002 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.438147 kubelet[2686]: E1106 00:19:44.438040 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.438496 kubelet[2686]: E1106 00:19:44.438480 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.438652 kubelet[2686]: W1106 00:19:44.438585 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.438652 kubelet[2686]: E1106 00:19:44.438604 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.439184 kubelet[2686]: E1106 00:19:44.439158 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.439184 kubelet[2686]: W1106 00:19:44.439182 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.439329 kubelet[2686]: E1106 00:19:44.439202 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.440165 kubelet[2686]: E1106 00:19:44.440139 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.440165 kubelet[2686]: W1106 00:19:44.440163 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.440328 kubelet[2686]: E1106 00:19:44.440186 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.440747 kubelet[2686]: E1106 00:19:44.440723 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.440747 kubelet[2686]: W1106 00:19:44.440744 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.440848 kubelet[2686]: E1106 00:19:44.440761 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.442944 kubelet[2686]: E1106 00:19:44.442877 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.442944 kubelet[2686]: W1106 00:19:44.442907 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.442944 kubelet[2686]: E1106 00:19:44.442930 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.443406 kubelet[2686]: E1106 00:19:44.443387 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.443456 kubelet[2686]: W1106 00:19:44.443407 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.443456 kubelet[2686]: E1106 00:19:44.443423 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.444826 kubelet[2686]: E1106 00:19:44.444682 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.444826 kubelet[2686]: W1106 00:19:44.444708 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.444826 kubelet[2686]: E1106 00:19:44.444728 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.445637 kubelet[2686]: E1106 00:19:44.445604 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.445637 kubelet[2686]: W1106 00:19:44.445629 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.445733 kubelet[2686]: E1106 00:19:44.445649 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.446551 kubelet[2686]: E1106 00:19:44.446526 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.446551 kubelet[2686]: W1106 00:19:44.446548 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.446664 kubelet[2686]: E1106 00:19:44.446567 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.447962 kubelet[2686]: E1106 00:19:44.447932 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.447962 kubelet[2686]: W1106 00:19:44.447959 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.448104 kubelet[2686]: E1106 00:19:44.447979 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.449519 kubelet[2686]: E1106 00:19:44.449458 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.449519 kubelet[2686]: W1106 00:19:44.449518 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.449658 kubelet[2686]: E1106 00:19:44.449543 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.449894 kubelet[2686]: E1106 00:19:44.449874 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.449934 kubelet[2686]: W1106 00:19:44.449894 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.449934 kubelet[2686]: E1106 00:19:44.449913 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.450744 kubelet[2686]: E1106 00:19:44.450720 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.450810 kubelet[2686]: W1106 00:19:44.450743 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.450810 kubelet[2686]: E1106 00:19:44.450762 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.451606 kubelet[2686]: E1106 00:19:44.451585 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.451655 kubelet[2686]: W1106 00:19:44.451606 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.451655 kubelet[2686]: E1106 00:19:44.451627 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.453104 kubelet[2686]: E1106 00:19:44.453024 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.453104 kubelet[2686]: W1106 00:19:44.453043 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.453104 kubelet[2686]: E1106 00:19:44.453065 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.455913 kubelet[2686]: E1106 00:19:44.455880 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.455913 kubelet[2686]: W1106 00:19:44.455908 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.456248 kubelet[2686]: E1106 00:19:44.455933 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.457662 kubelet[2686]: E1106 00:19:44.457627 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.457662 kubelet[2686]: W1106 00:19:44.457659 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.457802 kubelet[2686]: E1106 00:19:44.457686 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.458870 kubelet[2686]: E1106 00:19:44.458844 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.458941 kubelet[2686]: W1106 00:19:44.458868 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.458941 kubelet[2686]: E1106 00:19:44.458894 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.459612 kubelet[2686]: E1106 00:19:44.459581 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.459612 kubelet[2686]: W1106 00:19:44.459606 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.459751 kubelet[2686]: E1106 00:19:44.459627 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.460901 kubelet[2686]: E1106 00:19:44.460877 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.460901 kubelet[2686]: W1106 00:19:44.460899 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.461005 kubelet[2686]: E1106 00:19:44.460920 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.461354 kubelet[2686]: E1106 00:19:44.461215 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.461354 kubelet[2686]: W1106 00:19:44.461234 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.461354 kubelet[2686]: E1106 00:19:44.461250 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.462575 kubelet[2686]: E1106 00:19:44.462547 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.462575 kubelet[2686]: W1106 00:19:44.462573 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.462667 kubelet[2686]: E1106 00:19:44.462592 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.464006 kubelet[2686]: E1106 00:19:44.463980 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.464006 kubelet[2686]: W1106 00:19:44.464003 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.464145 kubelet[2686]: E1106 00:19:44.464023 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.465254 kubelet[2686]: E1106 00:19:44.465143 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.465254 kubelet[2686]: W1106 00:19:44.465168 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.465254 kubelet[2686]: E1106 00:19:44.465187 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:44.516413 kubelet[2686]: E1106 00:19:44.515634 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:44.516413 kubelet[2686]: W1106 00:19:44.515666 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:44.516413 kubelet[2686]: E1106 00:19:44.515704 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:44.539548 containerd[1494]: time="2025-11-06T00:19:44.538549664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cb4687868-xtlmk,Uid:6e65f593-bb11-4d8d-a6e0-32f839b87632,Namespace:calico-system,Attempt:0,} returns sandbox id \"af9f15ae62497fcda28ab40ae79fe29095e7d7b5084334536b49133026f44eb4\"" Nov 6 00:19:44.541992 kubelet[2686]: E1106 00:19:44.541575 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:44.546177 containerd[1494]: time="2025-11-06T00:19:44.546131859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 6 00:19:44.555628 containerd[1494]: time="2025-11-06T00:19:44.555449994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ndvs9,Uid:4712d542-6068-4702-8164-7eb051e8e51f,Namespace:calico-system,Attempt:0,} returns sandbox id \"550192ae9d9b31ca54e5025a7f3d9f97b7122e41a3dc4a46c054cb0385644eca\"" Nov 6 00:19:44.558226 kubelet[2686]: E1106 00:19:44.558165 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:45.868601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount590545195.mount: Deactivated successfully. Nov 6 00:19:45.977485 kubelet[2686]: E1106 00:19:45.977383 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:19:46.752857 containerd[1494]: time="2025-11-06T00:19:46.752804361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:46.754507 containerd[1494]: time="2025-11-06T00:19:46.754169033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 6 00:19:46.755129 containerd[1494]: time="2025-11-06T00:19:46.755092137Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:46.757959 containerd[1494]: time="2025-11-06T00:19:46.757914008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:46.759107 containerd[1494]: time="2025-11-06T00:19:46.759059848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.212150504s" Nov 6 00:19:46.759388 containerd[1494]: time="2025-11-06T00:19:46.759213582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 6 00:19:46.761412 containerd[1494]: time="2025-11-06T00:19:46.761391092Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 00:19:46.782531 containerd[1494]: time="2025-11-06T00:19:46.781557926Z" level=info msg="CreateContainer within sandbox \"af9f15ae62497fcda28ab40ae79fe29095e7d7b5084334536b49133026f44eb4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 00:19:46.786679 containerd[1494]: time="2025-11-06T00:19:46.786639638Z" level=info msg="Container 3f5b7bfe5d303b77a2e2b9b31c1aa28a594da7786f8f5017c468fa60674468f3: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:19:46.795504 containerd[1494]: time="2025-11-06T00:19:46.795434115Z" level=info msg="CreateContainer within sandbox \"af9f15ae62497fcda28ab40ae79fe29095e7d7b5084334536b49133026f44eb4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3f5b7bfe5d303b77a2e2b9b31c1aa28a594da7786f8f5017c468fa60674468f3\"" Nov 6 00:19:46.796676 containerd[1494]: time="2025-11-06T00:19:46.796643710Z" level=info msg="StartContainer for \"3f5b7bfe5d303b77a2e2b9b31c1aa28a594da7786f8f5017c468fa60674468f3\"" Nov 6 00:19:46.798049 containerd[1494]: time="2025-11-06T00:19:46.798016463Z" level=info msg="connecting to shim 3f5b7bfe5d303b77a2e2b9b31c1aa28a594da7786f8f5017c468fa60674468f3" address="unix:///run/containerd/s/b7ce599aea78fb2d0cf41bed28b180748c1159e739864ddc7035b7c4bcc4a20a" protocol=ttrpc version=3 Nov 6 00:19:46.831721 systemd[1]: Started cri-containerd-3f5b7bfe5d303b77a2e2b9b31c1aa28a594da7786f8f5017c468fa60674468f3.scope - libcontainer container 3f5b7bfe5d303b77a2e2b9b31c1aa28a594da7786f8f5017c468fa60674468f3. Nov 6 00:19:46.917510 containerd[1494]: time="2025-11-06T00:19:46.915966195Z" level=info msg="StartContainer for \"3f5b7bfe5d303b77a2e2b9b31c1aa28a594da7786f8f5017c468fa60674468f3\" returns successfully" Nov 6 00:19:47.140892 kubelet[2686]: E1106 00:19:47.140775 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:47.152569 kubelet[2686]: E1106 00:19:47.152519 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.154633 kubelet[2686]: W1106 00:19:47.152544 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.154633 kubelet[2686]: E1106 00:19:47.154559 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.155290 kubelet[2686]: E1106 00:19:47.155236 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.155290 kubelet[2686]: W1106 00:19:47.155253 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.155553 kubelet[2686]: E1106 00:19:47.155273 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:47.156042 kubelet[2686]: E1106 00:19:47.155983 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.156042 kubelet[2686]: W1106 00:19:47.155996 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.156042 kubelet[2686]: E1106 00:19:47.156006 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.158907 kubelet[2686]: E1106 00:19:47.158748 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.158907 kubelet[2686]: W1106 00:19:47.158765 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.158907 kubelet[2686]: E1106 00:19:47.158784 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.159707 kubelet[2686]: E1106 00:19:47.159510 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.159707 kubelet[2686]: W1106 00:19:47.159531 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.159707 kubelet[2686]: E1106 00:19:47.159569 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.160069 kubelet[2686]: E1106 00:19:47.159972 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.160069 kubelet[2686]: W1106 00:19:47.160000 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.160069 kubelet[2686]: E1106 00:19:47.160011 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.160455 kubelet[2686]: E1106 00:19:47.160418 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.160455 kubelet[2686]: W1106 00:19:47.160430 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.160455 kubelet[2686]: E1106 00:19:47.160441 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:47.160869 kubelet[2686]: E1106 00:19:47.160810 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.160869 kubelet[2686]: W1106 00:19:47.160830 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.160869 kubelet[2686]: E1106 00:19:47.160844 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.161814 kubelet[2686]: E1106 00:19:47.161772 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.161814 kubelet[2686]: W1106 00:19:47.161785 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.161814 kubelet[2686]: E1106 00:19:47.161797 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.162611 kubelet[2686]: E1106 00:19:47.162590 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.162790 kubelet[2686]: W1106 00:19:47.162682 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.162790 kubelet[2686]: E1106 00:19:47.162702 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.162987 kubelet[2686]: E1106 00:19:47.162977 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.163553 kubelet[2686]: W1106 00:19:47.163535 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.163695 kubelet[2686]: E1106 00:19:47.163614 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.164609 kubelet[2686]: E1106 00:19:47.164586 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.164759 kubelet[2686]: W1106 00:19:47.164690 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.164759 kubelet[2686]: E1106 00:19:47.164707 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:47.165840 kubelet[2686]: E1106 00:19:47.165821 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.165995 kubelet[2686]: W1106 00:19:47.165890 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.165995 kubelet[2686]: E1106 00:19:47.165904 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.166485 kubelet[2686]: E1106 00:19:47.166334 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.166485 kubelet[2686]: W1106 00:19:47.166350 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.166485 kubelet[2686]: E1106 00:19:47.166361 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.167751 kubelet[2686]: E1106 00:19:47.167686 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.167751 kubelet[2686]: W1106 00:19:47.167701 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.167751 kubelet[2686]: E1106 00:19:47.167713 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.168694 kubelet[2686]: E1106 00:19:47.168676 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.168694 kubelet[2686]: W1106 00:19:47.168692 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.168828 kubelet[2686]: E1106 00:19:47.168705 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.168926 kubelet[2686]: E1106 00:19:47.168916 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.168926 kubelet[2686]: W1106 00:19:47.168926 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.169006 kubelet[2686]: E1106 00:19:47.168935 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:47.169128 kubelet[2686]: E1106 00:19:47.169117 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.169128 kubelet[2686]: W1106 00:19:47.169127 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.169208 kubelet[2686]: E1106 00:19:47.169135 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.169338 kubelet[2686]: E1106 00:19:47.169328 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.169338 kubelet[2686]: W1106 00:19:47.169336 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.169941 kubelet[2686]: E1106 00:19:47.169344 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.169941 kubelet[2686]: E1106 00:19:47.169514 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.169941 kubelet[2686]: W1106 00:19:47.169521 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.169941 kubelet[2686]: E1106 00:19:47.169618 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.169941 kubelet[2686]: E1106 00:19:47.169756 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.169941 kubelet[2686]: W1106 00:19:47.169764 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.169941 kubelet[2686]: E1106 00:19:47.169771 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.171666 kubelet[2686]: E1106 00:19:47.171647 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.171666 kubelet[2686]: W1106 00:19:47.171663 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.171980 kubelet[2686]: E1106 00:19:47.171677 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:47.171980 kubelet[2686]: E1106 00:19:47.171859 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.171980 kubelet[2686]: W1106 00:19:47.171866 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.171980 kubelet[2686]: E1106 00:19:47.171873 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.172131 kubelet[2686]: E1106 00:19:47.172012 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.172131 kubelet[2686]: W1106 00:19:47.172070 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.172131 kubelet[2686]: E1106 00:19:47.172084 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.172495 kubelet[2686]: E1106 00:19:47.172248 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.172495 kubelet[2686]: W1106 00:19:47.172260 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.172495 kubelet[2686]: E1106 00:19:47.172268 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.172495 kubelet[2686]: E1106 00:19:47.172386 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.172495 kubelet[2686]: W1106 00:19:47.172392 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.172495 kubelet[2686]: E1106 00:19:47.172398 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.172695 kubelet[2686]: E1106 00:19:47.172562 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.172695 kubelet[2686]: W1106 00:19:47.172569 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.172695 kubelet[2686]: E1106 00:19:47.172576 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:47.173143 kubelet[2686]: E1106 00:19:47.173109 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.173143 kubelet[2686]: W1106 00:19:47.173127 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.173143 kubelet[2686]: E1106 00:19:47.173140 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.173549 kubelet[2686]: E1106 00:19:47.173297 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.173549 kubelet[2686]: W1106 00:19:47.173303 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.173549 kubelet[2686]: E1106 00:19:47.173311 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.173549 kubelet[2686]: E1106 00:19:47.173506 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.173549 kubelet[2686]: W1106 00:19:47.173515 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.173549 kubelet[2686]: E1106 00:19:47.173523 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.173724 kubelet[2686]: E1106 00:19:47.173668 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.173724 kubelet[2686]: W1106 00:19:47.173674 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.173724 kubelet[2686]: E1106 00:19:47.173682 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.173876 kubelet[2686]: E1106 00:19:47.173859 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.173876 kubelet[2686]: W1106 00:19:47.173868 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.173876 kubelet[2686]: E1106 00:19:47.173875 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:47.174190 kubelet[2686]: E1106 00:19:47.174178 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:47.174190 kubelet[2686]: W1106 00:19:47.174187 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:47.174259 kubelet[2686]: E1106 00:19:47.174195 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:47.978134 kubelet[2686]: E1106 00:19:47.978080 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:19:48.144218 kubelet[2686]: I1106 00:19:48.144166 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:19:48.145545 kubelet[2686]: E1106 00:19:48.145500 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:48.175604 kubelet[2686]: E1106 00:19:48.175560 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.175604 kubelet[2686]: W1106 00:19:48.175587 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.175604 kubelet[2686]: E1106 00:19:48.175615 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.176257 kubelet[2686]: E1106 00:19:48.176237 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.176257 kubelet[2686]: W1106 00:19:48.176251 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.176319 kubelet[2686]: E1106 00:19:48.176266 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.176751 kubelet[2686]: E1106 00:19:48.176688 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.176751 kubelet[2686]: W1106 00:19:48.176704 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.176751 kubelet[2686]: E1106 00:19:48.176716 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:48.176970 kubelet[2686]: E1106 00:19:48.176918 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.176970 kubelet[2686]: W1106 00:19:48.176926 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.176970 kubelet[2686]: E1106 00:19:48.176935 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.177249 kubelet[2686]: E1106 00:19:48.177163 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.177249 kubelet[2686]: W1106 00:19:48.177170 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.177249 kubelet[2686]: E1106 00:19:48.177179 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.177537 kubelet[2686]: E1106 00:19:48.177501 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.177537 kubelet[2686]: W1106 00:19:48.177509 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.177537 kubelet[2686]: E1106 00:19:48.177520 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.177843 kubelet[2686]: E1106 00:19:48.177811 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.177843 kubelet[2686]: W1106 00:19:48.177823 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.177843 kubelet[2686]: E1106 00:19:48.177832 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.178306 kubelet[2686]: E1106 00:19:48.178276 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.178306 kubelet[2686]: W1106 00:19:48.178294 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.178395 kubelet[2686]: E1106 00:19:48.178311 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:48.178662 kubelet[2686]: E1106 00:19:48.178633 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.178662 kubelet[2686]: W1106 00:19:48.178647 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.178662 kubelet[2686]: E1106 00:19:48.178658 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.179225 kubelet[2686]: E1106 00:19:48.179199 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.179225 kubelet[2686]: W1106 00:19:48.179218 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.179308 kubelet[2686]: E1106 00:19:48.179237 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.179880 kubelet[2686]: E1106 00:19:48.179809 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.179880 kubelet[2686]: W1106 00:19:48.179826 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.179880 kubelet[2686]: E1106 00:19:48.179838 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.180150 kubelet[2686]: E1106 00:19:48.180133 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.180150 kubelet[2686]: W1106 00:19:48.180148 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.180430 kubelet[2686]: E1106 00:19:48.180160 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.181557 kubelet[2686]: E1106 00:19:48.181534 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.181655 kubelet[2686]: W1106 00:19:48.181549 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.181799 kubelet[2686]: E1106 00:19:48.181659 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:48.182418 kubelet[2686]: E1106 00:19:48.182098 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.182418 kubelet[2686]: W1106 00:19:48.182112 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.182418 kubelet[2686]: E1106 00:19:48.182222 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.182561 kubelet[2686]: E1106 00:19:48.182549 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.182561 kubelet[2686]: W1106 00:19:48.182558 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.182649 kubelet[2686]: E1106 00:19:48.182569 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.229657 containerd[1494]: time="2025-11-06T00:19:48.229525214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:48.231591 containerd[1494]: time="2025-11-06T00:19:48.231391748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 6 00:19:48.232519 containerd[1494]: time="2025-11-06T00:19:48.232488987Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:48.234881 containerd[1494]: time="2025-11-06T00:19:48.234824985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:48.236629 containerd[1494]: time="2025-11-06T00:19:48.236576417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.475019101s" Nov 6 00:19:48.236629 containerd[1494]: time="2025-11-06T00:19:48.236636906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 00:19:48.242779 containerd[1494]: time="2025-11-06T00:19:48.242719496Z" level=info msg="CreateContainer within sandbox \"550192ae9d9b31ca54e5025a7f3d9f97b7122e41a3dc4a46c054cb0385644eca\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 00:19:48.276495 kubelet[2686]: E1106 00:19:48.276180 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 
00:19:48.276495 kubelet[2686]: W1106 00:19:48.276211 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.276495 kubelet[2686]: E1106 00:19:48.276253 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.276836 kubelet[2686]: E1106 00:19:48.276570 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.276836 kubelet[2686]: W1106 00:19:48.276580 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.276836 kubelet[2686]: E1106 00:19:48.276592 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.276927 kubelet[2686]: E1106 00:19:48.276857 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.276927 kubelet[2686]: W1106 00:19:48.276867 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.276927 kubelet[2686]: E1106 00:19:48.276878 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.277422 kubelet[2686]: E1106 00:19:48.277177 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.277422 kubelet[2686]: W1106 00:19:48.277190 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.277422 kubelet[2686]: E1106 00:19:48.277201 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.277597 kubelet[2686]: E1106 00:19:48.277445 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.277597 kubelet[2686]: W1106 00:19:48.277454 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.277597 kubelet[2686]: E1106 00:19:48.277462 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:48.277975 kubelet[2686]: E1106 00:19:48.277746 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.277975 kubelet[2686]: W1106 00:19:48.277756 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.277975 kubelet[2686]: E1106 00:19:48.277767 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.278087 kubelet[2686]: E1106 00:19:48.277992 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.278087 kubelet[2686]: W1106 00:19:48.277998 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.278087 kubelet[2686]: E1106 00:19:48.278006 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.278250 kubelet[2686]: E1106 00:19:48.278221 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.278250 kubelet[2686]: W1106 00:19:48.278227 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.278250 kubelet[2686]: E1106 00:19:48.278235 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.278721 kubelet[2686]: E1106 00:19:48.278509 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.278721 kubelet[2686]: W1106 00:19:48.278525 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.278721 kubelet[2686]: E1106 00:19:48.278538 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.278909 kubelet[2686]: E1106 00:19:48.278846 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.278909 kubelet[2686]: W1106 00:19:48.278857 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.278909 kubelet[2686]: E1106 00:19:48.278869 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:48.280045 kubelet[2686]: E1106 00:19:48.279209 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.280045 kubelet[2686]: W1106 00:19:48.279237 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.280045 kubelet[2686]: E1106 00:19:48.279247 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.280159 kubelet[2686]: E1106 00:19:48.280078 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.280159 kubelet[2686]: W1106 00:19:48.280092 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.280159 kubelet[2686]: E1106 00:19:48.280103 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.280456 kubelet[2686]: E1106 00:19:48.280389 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.280456 kubelet[2686]: W1106 00:19:48.280400 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.280456 kubelet[2686]: E1106 00:19:48.280409 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.281726 kubelet[2686]: E1106 00:19:48.280729 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.281726 kubelet[2686]: W1106 00:19:48.280738 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.281726 kubelet[2686]: E1106 00:19:48.280747 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.281726 kubelet[2686]: E1106 00:19:48.281018 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.281726 kubelet[2686]: W1106 00:19:48.281031 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.281726 kubelet[2686]: E1106 00:19:48.281092 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:19:48.281726 kubelet[2686]: E1106 00:19:48.281311 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.281726 kubelet[2686]: W1106 00:19:48.281319 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.281726 kubelet[2686]: E1106 00:19:48.281328 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.282523 kubelet[2686]: E1106 00:19:48.282160 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.282523 kubelet[2686]: W1106 00:19:48.282174 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.282523 kubelet[2686]: E1106 00:19:48.282185 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.282523 kubelet[2686]: E1106 00:19:48.282427 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:19:48.282523 kubelet[2686]: W1106 00:19:48.282435 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:19:48.282523 kubelet[2686]: E1106 00:19:48.282444 2686 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:19:48.290503 containerd[1494]: time="2025-11-06T00:19:48.289749984Z" level=info msg="Container 8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:19:48.297983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910231504.mount: Deactivated successfully. Nov 6 00:19:48.305687 containerd[1494]: time="2025-11-06T00:19:48.305556744Z" level=info msg="CreateContainer within sandbox \"550192ae9d9b31ca54e5025a7f3d9f97b7122e41a3dc4a46c054cb0385644eca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3\"" Nov 6 00:19:48.308840 containerd[1494]: time="2025-11-06T00:19:48.308697384Z" level=info msg="StartContainer for \"8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3\"" Nov 6 00:19:48.324123 containerd[1494]: time="2025-11-06T00:19:48.323893663Z" level=info msg="connecting to shim 8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3" address="unix:///run/containerd/s/1f5003c61f8782dec9c9b31b2725abe25022388c1776f15448f430e720179b7f" protocol=ttrpc version=3 Nov 6 00:19:48.359811 systemd[1]: Started cri-containerd-8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3.scope - libcontainer container 8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3. 
Nov 6 00:19:48.421710 containerd[1494]: time="2025-11-06T00:19:48.421668204Z" level=info msg="StartContainer for \"8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3\" returns successfully" Nov 6 00:19:48.438702 systemd[1]: cri-containerd-8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3.scope: Deactivated successfully. Nov 6 00:19:48.478084 containerd[1494]: time="2025-11-06T00:19:48.477959157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3\" id:\"8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3\" pid:3413 exited_at:{seconds:1762388388 nanos:442740869}" Nov 6 00:19:48.493621 containerd[1494]: time="2025-11-06T00:19:48.493424122Z" level=info msg="received exit event container_id:\"8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3\" id:\"8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3\" pid:3413 exited_at:{seconds:1762388388 nanos:442740869}" Nov 6 00:19:48.547055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b099bb0fa5b58335d6c0d2948cfc9f8dbd1713c4060ba124059648b5b6ba5f3-rootfs.mount: Deactivated successfully. Nov 6 00:19:49.156081 kubelet[2686]: E1106 00:19:49.155612 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:49.157713 containerd[1494]: time="2025-11-06T00:19:49.157661478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 00:19:49.176993 kubelet[2686]: I1106 00:19:49.176721 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cb4687868-xtlmk" podStartSLOduration=3.96155964 podStartE2EDuration="6.176692818s" podCreationTimestamp="2025-11-06 00:19:43 +0000 UTC" firstStartedPulling="2025-11-06 00:19:44.545063614 +0000 UTC m=+23.748923150" lastFinishedPulling="2025-11-06 00:19:46.760196815 +0000 UTC m=+25.964056328" observedRunningTime="2025-11-06 00:19:47.210029761 +0000 UTC m=+26.413889295" watchObservedRunningTime="2025-11-06 00:19:49.176692818 +0000 UTC m=+28.380552353" Nov 6 00:19:49.978394 kubelet[2686]: E1106 00:19:49.977057 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:19:51.980214 kubelet[2686]: E1106 00:19:51.980061 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:19:52.946240 containerd[1494]: time="2025-11-06T00:19:52.946186388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:52.948054 containerd[1494]: time="2025-11-06T00:19:52.948016547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 6 00:19:52.948927 containerd[1494]: time="2025-11-06T00:19:52.948862185Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:52.951028 containerd[1494]: time="2025-11-06T00:19:52.950975858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:19:52.952132 containerd[1494]: time="2025-11-06T00:19:52.951946896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.794145203s" Nov 6 00:19:52.952132 containerd[1494]: time="2025-11-06T00:19:52.952045141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 00:19:52.957676 containerd[1494]: time="2025-11-06T00:19:52.956967481Z" level=info msg="CreateContainer within sandbox \"550192ae9d9b31ca54e5025a7f3d9f97b7122e41a3dc4a46c054cb0385644eca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 00:19:52.990508 containerd[1494]: time="2025-11-06T00:19:52.989644664Z" level=info msg="Container 0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:19:52.996949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529418070.mount: Deactivated successfully. Nov 6 00:19:53.012718 containerd[1494]: time="2025-11-06T00:19:53.012668250Z" level=info msg="CreateContainer within sandbox \"550192ae9d9b31ca54e5025a7f3d9f97b7122e41a3dc4a46c054cb0385644eca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360\"" Nov 6 00:19:53.014900 containerd[1494]: time="2025-11-06T00:19:53.014719702Z" level=info msg="StartContainer for \"0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360\"" Nov 6 00:19:53.018090 containerd[1494]: time="2025-11-06T00:19:53.017871828Z" level=info msg="connecting to shim 0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360" address="unix:///run/containerd/s/1f5003c61f8782dec9c9b31b2725abe25022388c1776f15448f430e720179b7f" protocol=ttrpc version=3 Nov 6 00:19:53.048703 systemd[1]: Started cri-containerd-0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360.scope - libcontainer container 0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360. Nov 6 00:19:53.119690 containerd[1494]: time="2025-11-06T00:19:53.119612571Z" level=info msg="StartContainer for \"0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360\" returns successfully" Nov 6 00:19:53.179706 kubelet[2686]: E1106 00:19:53.179073 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:53.765760 systemd[1]: cri-containerd-0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360.scope: Deactivated successfully. Nov 6 00:19:53.766028 systemd[1]: cri-containerd-0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360.scope: Consumed 664ms CPU time, 159.4M memory peak, 10.6M read from disk, 171.3M written to disk. 
Nov 6 00:19:53.810662 containerd[1494]: time="2025-11-06T00:19:53.810621685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360\" id:\"0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360\" pid:3469 exited_at:{seconds:1762388393 nanos:782367525}" Nov 6 00:19:53.811887 containerd[1494]: time="2025-11-06T00:19:53.811743843Z" level=info msg="received exit event container_id:\"0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360\" id:\"0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360\" pid:3469 exited_at:{seconds:1762388393 nanos:782367525}" Nov 6 00:19:53.817238 kubelet[2686]: I1106 00:19:53.817134 2686 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 6 00:19:53.861747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d5bd80dfbf08e7bc69bafc5162330fdf2ecabe90d1fd0ac613d9a0fa30b4360-rootfs.mount: Deactivated successfully. Nov 6 00:19:53.893489 systemd[1]: Created slice kubepods-burstable-pode52578e9_6c1c_4ea4_bb10_32c4b4007c8c.slice - libcontainer container kubepods-burstable-pode52578e9_6c1c_4ea4_bb10_32c4b4007c8c.slice. Nov 6 00:19:53.908212 systemd[1]: Created slice kubepods-burstable-poda63c9258_20b6_4f02_98b1_7ffadf516e5e.slice - libcontainer container kubepods-burstable-poda63c9258_20b6_4f02_98b1_7ffadf516e5e.slice. Nov 6 00:19:53.933360 kubelet[2686]: I1106 00:19:53.932811 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e52578e9-6c1c-4ea4-bb10-32c4b4007c8c-config-volume\") pod \"coredns-66bc5c9577-jf6gr\" (UID: \"e52578e9-6c1c-4ea4-bb10-32c4b4007c8c\") " pod="kube-system/coredns-66bc5c9577-jf6gr" Nov 6 00:19:53.934500 kubelet[2686]: I1106 00:19:53.934022 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzsw\" (UniqueName: \"kubernetes.io/projected/e52578e9-6c1c-4ea4-bb10-32c4b4007c8c-kube-api-access-gbzsw\") pod \"coredns-66bc5c9577-jf6gr\" (UID: \"e52578e9-6c1c-4ea4-bb10-32c4b4007c8c\") " pod="kube-system/coredns-66bc5c9577-jf6gr" Nov 6 00:19:53.941867 systemd[1]: Created slice kubepods-besteffort-pod221557dd_55b6_4b8e_a63d_bc09352c8c41.slice - libcontainer container kubepods-besteffort-pod221557dd_55b6_4b8e_a63d_bc09352c8c41.slice. Nov 6 00:19:53.951837 systemd[1]: Created slice kubepods-besteffort-podfa727a0e_c8e3_4851_8c43_fa33e679ce52.slice - libcontainer container kubepods-besteffort-podfa727a0e_c8e3_4851_8c43_fa33e679ce52.slice. Nov 6 00:19:53.960186 systemd[1]: Created slice kubepods-besteffort-podbc2bfd40_07fc_45df_b493_7140e7f7d72c.slice - libcontainer container kubepods-besteffort-podbc2bfd40_07fc_45df_b493_7140e7f7d72c.slice. Nov 6 00:19:53.969784 systemd[1]: Created slice kubepods-besteffort-podd092cd15_7a3f_47f6_bde9_78a01defdd36.slice - libcontainer container kubepods-besteffort-podd092cd15_7a3f_47f6_bde9_78a01defdd36.slice. Nov 6 00:19:53.977518 systemd[1]: Created slice kubepods-besteffort-pod22d06117_b04a_43e9_87e1_aa14b7fcef4f.slice - libcontainer container kubepods-besteffort-pod22d06117_b04a_43e9_87e1_aa14b7fcef4f.slice. Nov 6 00:19:53.990329 systemd[1]: Created slice kubepods-besteffort-pod14475adc_4ac3_4f9b_9293_bb510ff52d31.slice - libcontainer container kubepods-besteffort-pod14475adc_4ac3_4f9b_9293_bb510ff52d31.slice. 
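The "Created slice kubepods-…-pod….slice" lines above come from kubelet's systemd cgroup driver: each pod gets a slice named from its QoS class and UID. The mapping below is reconstructed from the names in this log (a sketch, not kubelet's code): "kubepods", the lower-cased QoS class (Guaranteed pods, which do not appear here, sit directly under kubepods), and "pod" plus the UID with dashes turned into underscores, joined with "-".

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName rebuilds the slice names seen in the log entries above.
    func podSliceName(qosClass, podUID string) string {
        parts := []string{"kubepods"}
        if qosClass != "Guaranteed" {
            parts = append(parts, strings.ToLower(qosClass))
        }
        parts = append(parts, "pod"+strings.ReplaceAll(podUID, "-", "_"))
        return strings.Join(parts, "-") + ".slice"
    }

    func main() {
        // The coredns-66bc5c9577-jf6gr pod from the entries above.
        fmt.Println(podSliceName("Burstable", "e52578e9-6c1c-4ea4-bb10-32c4b4007c8c"))
        // kubepods-burstable-pode52578e9_6c1c_4ea4_bb10_32c4b4007c8c.slice

        // The csi-node-driver-fsxfk pod, a BestEffort one.
        fmt.Println(podSliceName("BestEffort", "14475adc-4ac3-4f9b-9293-bb510ff52d31"))
        // kubepods-besteffort-pod14475adc_4ac3_4f9b_9293_bb510ff52d31.slice
    }
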
Nov 6 00:19:53.994900 containerd[1494]: time="2025-11-06T00:19:53.994851855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fsxfk,Uid:14475adc-4ac3-4f9b-9293-bb510ff52d31,Namespace:calico-system,Attempt:0,}" Nov 6 00:19:54.036446 kubelet[2686]: I1106 00:19:54.035525 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hnz\" (UniqueName: \"kubernetes.io/projected/d092cd15-7a3f-47f6-bde9-78a01defdd36-kube-api-access-w6hnz\") pod \"goldmane-7c778bb748-fx2xl\" (UID: \"d092cd15-7a3f-47f6-bde9-78a01defdd36\") " pod="calico-system/goldmane-7c778bb748-fx2xl" Nov 6 00:19:54.037111 kubelet[2686]: I1106 00:19:54.037063 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a63c9258-20b6-4f02-98b1-7ffadf516e5e-config-volume\") pod \"coredns-66bc5c9577-nf9q8\" (UID: \"a63c9258-20b6-4f02-98b1-7ffadf516e5e\") " pod="kube-system/coredns-66bc5c9577-nf9q8" Nov 6 00:19:54.037358 kubelet[2686]: I1106 00:19:54.037113 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bc2bfd40-07fc-45df-b493-7140e7f7d72c-whisker-backend-key-pair\") pod \"whisker-5fbc5bf5dc-8xmbr\" (UID: \"bc2bfd40-07fc-45df-b493-7140e7f7d72c\") " pod="calico-system/whisker-5fbc5bf5dc-8xmbr" Nov 6 00:19:54.037410 kubelet[2686]: I1106 00:19:54.037364 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc2bfd40-07fc-45df-b493-7140e7f7d72c-whisker-ca-bundle\") pod \"whisker-5fbc5bf5dc-8xmbr\" (UID: \"bc2bfd40-07fc-45df-b493-7140e7f7d72c\") " pod="calico-system/whisker-5fbc5bf5dc-8xmbr" Nov 6 00:19:54.037410 kubelet[2686]: I1106 00:19:54.037383 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d092cd15-7a3f-47f6-bde9-78a01defdd36-goldmane-key-pair\") pod \"goldmane-7c778bb748-fx2xl\" (UID: \"d092cd15-7a3f-47f6-bde9-78a01defdd36\") " pod="calico-system/goldmane-7c778bb748-fx2xl" Nov 6 00:19:54.037543 kubelet[2686]: I1106 00:19:54.037528 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa727a0e-c8e3-4851-8c43-fa33e679ce52-tigera-ca-bundle\") pod \"calico-kube-controllers-8495cfffbb-59fst\" (UID: \"fa727a0e-c8e3-4851-8c43-fa33e679ce52\") " pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" Nov 6 00:19:54.037583 kubelet[2686]: I1106 00:19:54.037558 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6nfl\" (UniqueName: \"kubernetes.io/projected/bc2bfd40-07fc-45df-b493-7140e7f7d72c-kube-api-access-t6nfl\") pod \"whisker-5fbc5bf5dc-8xmbr\" (UID: \"bc2bfd40-07fc-45df-b493-7140e7f7d72c\") " pod="calico-system/whisker-5fbc5bf5dc-8xmbr" Nov 6 00:19:54.037611 kubelet[2686]: I1106 00:19:54.037593 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tkl\" (UniqueName: \"kubernetes.io/projected/fa727a0e-c8e3-4851-8c43-fa33e679ce52-kube-api-access-68tkl\") pod \"calico-kube-controllers-8495cfffbb-59fst\" (UID: \"fa727a0e-c8e3-4851-8c43-fa33e679ce52\") " 
pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" Nov 6 00:19:54.037645 kubelet[2686]: I1106 00:19:54.037617 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp58v\" (UniqueName: \"kubernetes.io/projected/221557dd-55b6-4b8e-a63d-bc09352c8c41-kube-api-access-xp58v\") pod \"calico-apiserver-5765b6cb-8c67x\" (UID: \"221557dd-55b6-4b8e-a63d-bc09352c8c41\") " pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" Nov 6 00:19:54.037682 kubelet[2686]: I1106 00:19:54.037654 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz82d\" (UniqueName: \"kubernetes.io/projected/a63c9258-20b6-4f02-98b1-7ffadf516e5e-kube-api-access-gz82d\") pod \"coredns-66bc5c9577-nf9q8\" (UID: \"a63c9258-20b6-4f02-98b1-7ffadf516e5e\") " pod="kube-system/coredns-66bc5c9577-nf9q8" Nov 6 00:19:54.037682 kubelet[2686]: I1106 00:19:54.037669 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/22d06117-b04a-43e9-87e1-aa14b7fcef4f-calico-apiserver-certs\") pod \"calico-apiserver-5765b6cb-hzncn\" (UID: \"22d06117-b04a-43e9-87e1-aa14b7fcef4f\") " pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" Nov 6 00:19:54.037730 kubelet[2686]: I1106 00:19:54.037682 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79m2t\" (UniqueName: \"kubernetes.io/projected/22d06117-b04a-43e9-87e1-aa14b7fcef4f-kube-api-access-79m2t\") pod \"calico-apiserver-5765b6cb-hzncn\" (UID: \"22d06117-b04a-43e9-87e1-aa14b7fcef4f\") " pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" Nov 6 00:19:54.037730 kubelet[2686]: I1106 00:19:54.037701 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d092cd15-7a3f-47f6-bde9-78a01defdd36-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-fx2xl\" (UID: \"d092cd15-7a3f-47f6-bde9-78a01defdd36\") " pod="calico-system/goldmane-7c778bb748-fx2xl" Nov 6 00:19:54.037730 kubelet[2686]: I1106 00:19:54.037719 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/221557dd-55b6-4b8e-a63d-bc09352c8c41-calico-apiserver-certs\") pod \"calico-apiserver-5765b6cb-8c67x\" (UID: \"221557dd-55b6-4b8e-a63d-bc09352c8c41\") " pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" Nov 6 00:19:54.037807 kubelet[2686]: I1106 00:19:54.037733 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d092cd15-7a3f-47f6-bde9-78a01defdd36-config\") pod \"goldmane-7c778bb748-fx2xl\" (UID: \"d092cd15-7a3f-47f6-bde9-78a01defdd36\") " pod="calico-system/goldmane-7c778bb748-fx2xl" Nov 6 00:19:54.209012 kubelet[2686]: E1106 00:19:54.208758 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:54.217884 containerd[1494]: time="2025-11-06T00:19:54.217288352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jf6gr,Uid:e52578e9-6c1c-4ea4-bb10-32c4b4007c8c,Namespace:kube-system,Attempt:0,}" Nov 6 00:19:54.238736 kubelet[2686]: E1106 00:19:54.238702 2686 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:54.240714 containerd[1494]: time="2025-11-06T00:19:54.240672801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nf9q8,Uid:a63c9258-20b6-4f02-98b1-7ffadf516e5e,Namespace:kube-system,Attempt:0,}" Nov 6 00:19:54.246912 kubelet[2686]: E1106 00:19:54.246848 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:54.255420 containerd[1494]: time="2025-11-06T00:19:54.253382123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 00:19:54.258330 containerd[1494]: time="2025-11-06T00:19:54.258089833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5765b6cb-8c67x,Uid:221557dd-55b6-4b8e-a63d-bc09352c8c41,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:19:54.260699 containerd[1494]: time="2025-11-06T00:19:54.260666795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8495cfffbb-59fst,Uid:fa727a0e-c8e3-4851-8c43-fa33e679ce52,Namespace:calico-system,Attempt:0,}" Nov 6 00:19:54.271324 containerd[1494]: time="2025-11-06T00:19:54.271278138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fbc5bf5dc-8xmbr,Uid:bc2bfd40-07fc-45df-b493-7140e7f7d72c,Namespace:calico-system,Attempt:0,}" Nov 6 00:19:54.279216 containerd[1494]: time="2025-11-06T00:19:54.279113138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-fx2xl,Uid:d092cd15-7a3f-47f6-bde9-78a01defdd36,Namespace:calico-system,Attempt:0,}" Nov 6 00:19:54.290757 containerd[1494]: time="2025-11-06T00:19:54.290069435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5765b6cb-hzncn,Uid:22d06117-b04a-43e9-87e1-aa14b7fcef4f,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:19:54.449156 containerd[1494]: time="2025-11-06T00:19:54.449017386Z" level=error msg="Failed to destroy network for sandbox \"eeb3a96f6d964af66ce123f059e283fad9dc050fb095526cd45128c0c7b9bf7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.467430 containerd[1494]: time="2025-11-06T00:19:54.467178220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fsxfk,Uid:14475adc-4ac3-4f9b-9293-bb510ff52d31,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb3a96f6d964af66ce123f059e283fad9dc050fb095526cd45128c0c7b9bf7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.477435 kubelet[2686]: E1106 00:19:54.476447 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb3a96f6d964af66ce123f059e283fad9dc050fb095526cd45128c0c7b9bf7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.477435 kubelet[2686]: E1106 00:19:54.476570 2686 kuberuntime_sandbox.go:71] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb3a96f6d964af66ce123f059e283fad9dc050fb095526cd45128c0c7b9bf7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fsxfk" Nov 6 00:19:54.477435 kubelet[2686]: E1106 00:19:54.476601 2686 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb3a96f6d964af66ce123f059e283fad9dc050fb095526cd45128c0c7b9bf7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fsxfk" Nov 6 00:19:54.477918 kubelet[2686]: E1106 00:19:54.476671 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fsxfk_calico-system(14475adc-4ac3-4f9b-9293-bb510ff52d31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fsxfk_calico-system(14475adc-4ac3-4f9b-9293-bb510ff52d31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eeb3a96f6d964af66ce123f059e283fad9dc050fb095526cd45128c0c7b9bf7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:19:54.538855 containerd[1494]: time="2025-11-06T00:19:54.538802282Z" level=error msg="Failed to destroy network for sandbox \"b5625b880a78584c7c5f015119e5494e73142c4f8f1510421a187b612e10309c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.541923 containerd[1494]: time="2025-11-06T00:19:54.541240924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5765b6cb-hzncn,Uid:22d06117-b04a-43e9-87e1-aa14b7fcef4f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5625b880a78584c7c5f015119e5494e73142c4f8f1510421a187b612e10309c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.542176 kubelet[2686]: E1106 00:19:54.541541 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5625b880a78584c7c5f015119e5494e73142c4f8f1510421a187b612e10309c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.542176 kubelet[2686]: E1106 00:19:54.541609 2686 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5625b880a78584c7c5f015119e5494e73142c4f8f1510421a187b612e10309c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" Nov 6 00:19:54.542176 kubelet[2686]: E1106 00:19:54.541629 2686 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5625b880a78584c7c5f015119e5494e73142c4f8f1510421a187b612e10309c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" Nov 6 00:19:54.542409 kubelet[2686]: E1106 00:19:54.541691 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5765b6cb-hzncn_calico-apiserver(22d06117-b04a-43e9-87e1-aa14b7fcef4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5765b6cb-hzncn_calico-apiserver(22d06117-b04a-43e9-87e1-aa14b7fcef4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5625b880a78584c7c5f015119e5494e73142c4f8f1510421a187b612e10309c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" podUID="22d06117-b04a-43e9-87e1-aa14b7fcef4f" Nov 6 00:19:54.558752 containerd[1494]: time="2025-11-06T00:19:54.558676915Z" level=error msg="Failed to destroy network for sandbox \"aaec83ecb41cb545f4b5123e07fce52afdeef43ef31be5d04fec09ed0156a449\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.560108 containerd[1494]: time="2025-11-06T00:19:54.559985323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5765b6cb-8c67x,Uid:221557dd-55b6-4b8e-a63d-bc09352c8c41,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaec83ecb41cb545f4b5123e07fce52afdeef43ef31be5d04fec09ed0156a449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.560359 kubelet[2686]: E1106 00:19:54.560270 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaec83ecb41cb545f4b5123e07fce52afdeef43ef31be5d04fec09ed0156a449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.560359 kubelet[2686]: E1106 00:19:54.560327 2686 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaec83ecb41cb545f4b5123e07fce52afdeef43ef31be5d04fec09ed0156a449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" Nov 6 00:19:54.560359 kubelet[2686]: E1106 00:19:54.560350 2686 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aaec83ecb41cb545f4b5123e07fce52afdeef43ef31be5d04fec09ed0156a449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" Nov 6 00:19:54.560769 kubelet[2686]: E1106 00:19:54.560408 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5765b6cb-8c67x_calico-apiserver(221557dd-55b6-4b8e-a63d-bc09352c8c41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5765b6cb-8c67x_calico-apiserver(221557dd-55b6-4b8e-a63d-bc09352c8c41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aaec83ecb41cb545f4b5123e07fce52afdeef43ef31be5d04fec09ed0156a449\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" podUID="221557dd-55b6-4b8e-a63d-bc09352c8c41" Nov 6 00:19:54.586966 containerd[1494]: time="2025-11-06T00:19:54.586911884Z" level=error msg="Failed to destroy network for sandbox \"32e00c105d1ff0a807ae4f646724913037d05468febb4c25c604a9fe43af3e5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.592963 containerd[1494]: time="2025-11-06T00:19:54.592126603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jf6gr,Uid:e52578e9-6c1c-4ea4-bb10-32c4b4007c8c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32e00c105d1ff0a807ae4f646724913037d05468febb4c25c604a9fe43af3e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.593313 kubelet[2686]: E1106 00:19:54.592417 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32e00c105d1ff0a807ae4f646724913037d05468febb4c25c604a9fe43af3e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.593313 kubelet[2686]: E1106 00:19:54.592490 2686 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32e00c105d1ff0a807ae4f646724913037d05468febb4c25c604a9fe43af3e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jf6gr" Nov 6 00:19:54.593313 kubelet[2686]: E1106 00:19:54.592525 2686 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32e00c105d1ff0a807ae4f646724913037d05468febb4c25c604a9fe43af3e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jf6gr" Nov 6 00:19:54.597396 
kubelet[2686]: E1106 00:19:54.592637 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-jf6gr_kube-system(e52578e9-6c1c-4ea4-bb10-32c4b4007c8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-jf6gr_kube-system(e52578e9-6c1c-4ea4-bb10-32c4b4007c8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32e00c105d1ff0a807ae4f646724913037d05468febb4c25c604a9fe43af3e5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-jf6gr" podUID="e52578e9-6c1c-4ea4-bb10-32c4b4007c8c" Nov 6 00:19:54.611886 containerd[1494]: time="2025-11-06T00:19:54.611827810Z" level=error msg="Failed to destroy network for sandbox \"a211984da40022c62ad0ee383e457baa19d3157fcc4e68ef540f348b298e49fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.613032 containerd[1494]: time="2025-11-06T00:19:54.612945962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8495cfffbb-59fst,Uid:fa727a0e-c8e3-4851-8c43-fa33e679ce52,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a211984da40022c62ad0ee383e457baa19d3157fcc4e68ef540f348b298e49fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.613338 kubelet[2686]: E1106 00:19:54.613295 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a211984da40022c62ad0ee383e457baa19d3157fcc4e68ef540f348b298e49fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.613415 kubelet[2686]: E1106 00:19:54.613362 2686 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a211984da40022c62ad0ee383e457baa19d3157fcc4e68ef540f348b298e49fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" Nov 6 00:19:54.613415 kubelet[2686]: E1106 00:19:54.613383 2686 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a211984da40022c62ad0ee383e457baa19d3157fcc4e68ef540f348b298e49fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" Nov 6 00:19:54.613496 kubelet[2686]: E1106 00:19:54.613441 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8495cfffbb-59fst_calico-system(fa727a0e-c8e3-4851-8c43-fa33e679ce52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-8495cfffbb-59fst_calico-system(fa727a0e-c8e3-4851-8c43-fa33e679ce52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a211984da40022c62ad0ee383e457baa19d3157fcc4e68ef540f348b298e49fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" podUID="fa727a0e-c8e3-4851-8c43-fa33e679ce52" Nov 6 00:19:54.615844 containerd[1494]: time="2025-11-06T00:19:54.615791843Z" level=error msg="Failed to destroy network for sandbox \"dfef9db57e6ec0cd14cba8ae6867a8509848bdfb1d3a3aeddaf9fb40750ce079\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.617148 containerd[1494]: time="2025-11-06T00:19:54.617107589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nf9q8,Uid:a63c9258-20b6-4f02-98b1-7ffadf516e5e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfef9db57e6ec0cd14cba8ae6867a8509848bdfb1d3a3aeddaf9fb40750ce079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.617847 kubelet[2686]: E1106 00:19:54.617608 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfef9db57e6ec0cd14cba8ae6867a8509848bdfb1d3a3aeddaf9fb40750ce079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.617847 kubelet[2686]: E1106 00:19:54.617692 2686 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfef9db57e6ec0cd14cba8ae6867a8509848bdfb1d3a3aeddaf9fb40750ce079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nf9q8" Nov 6 00:19:54.617847 kubelet[2686]: E1106 00:19:54.617731 2686 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfef9db57e6ec0cd14cba8ae6867a8509848bdfb1d3a3aeddaf9fb40750ce079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nf9q8" Nov 6 00:19:54.618008 kubelet[2686]: E1106 00:19:54.617805 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nf9q8_kube-system(a63c9258-20b6-4f02-98b1-7ffadf516e5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nf9q8_kube-system(a63c9258-20b6-4f02-98b1-7ffadf516e5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfef9db57e6ec0cd14cba8ae6867a8509848bdfb1d3a3aeddaf9fb40750ce079\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nf9q8" podUID="a63c9258-20b6-4f02-98b1-7ffadf516e5e" Nov 6 00:19:54.622289 containerd[1494]: time="2025-11-06T00:19:54.622212021Z" level=error msg="Failed to destroy network for sandbox \"0d875f8687819aef630867285585a3a4e5aa6e3aaa66d71e4bcf0df7e60f88f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.623360 containerd[1494]: time="2025-11-06T00:19:54.623281623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-fx2xl,Uid:d092cd15-7a3f-47f6-bde9-78a01defdd36,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d875f8687819aef630867285585a3a4e5aa6e3aaa66d71e4bcf0df7e60f88f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.624639 kubelet[2686]: E1106 00:19:54.624604 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d875f8687819aef630867285585a3a4e5aa6e3aaa66d71e4bcf0df7e60f88f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.624860 kubelet[2686]: E1106 00:19:54.624820 2686 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d875f8687819aef630867285585a3a4e5aa6e3aaa66d71e4bcf0df7e60f88f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-fx2xl" Nov 6 00:19:54.625031 kubelet[2686]: E1106 00:19:54.625012 2686 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d875f8687819aef630867285585a3a4e5aa6e3aaa66d71e4bcf0df7e60f88f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-fx2xl" Nov 6 00:19:54.625205 containerd[1494]: time="2025-11-06T00:19:54.625178204Z" level=error msg="Failed to destroy network for sandbox \"f3594df8967daa05257ee04b42079725b9cd8838943990f3046e94aaef7bad6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.625405 kubelet[2686]: E1106 00:19:54.625377 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-fx2xl_calico-system(d092cd15-7a3f-47f6-bde9-78a01defdd36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-fx2xl_calico-system(d092cd15-7a3f-47f6-bde9-78a01defdd36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d875f8687819aef630867285585a3a4e5aa6e3aaa66d71e4bcf0df7e60f88f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-fx2xl" podUID="d092cd15-7a3f-47f6-bde9-78a01defdd36" Nov 6 00:19:54.626044 containerd[1494]: time="2025-11-06T00:19:54.626007747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fbc5bf5dc-8xmbr,Uid:bc2bfd40-07fc-45df-b493-7140e7f7d72c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3594df8967daa05257ee04b42079725b9cd8838943990f3046e94aaef7bad6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.626779 kubelet[2686]: E1106 00:19:54.626722 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3594df8967daa05257ee04b42079725b9cd8838943990f3046e94aaef7bad6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:19:54.626944 kubelet[2686]: E1106 00:19:54.626849 2686 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3594df8967daa05257ee04b42079725b9cd8838943990f3046e94aaef7bad6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fbc5bf5dc-8xmbr" Nov 6 00:19:54.626944 kubelet[2686]: E1106 00:19:54.626872 2686 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3594df8967daa05257ee04b42079725b9cd8838943990f3046e94aaef7bad6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fbc5bf5dc-8xmbr" Nov 6 00:19:54.627198 kubelet[2686]: E1106 00:19:54.627024 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fbc5bf5dc-8xmbr_calico-system(bc2bfd40-07fc-45df-b493-7140e7f7d72c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fbc5bf5dc-8xmbr_calico-system(bc2bfd40-07fc-45df-b493-7140e7f7d72c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3594df8967daa05257ee04b42079725b9cd8838943990f3046e94aaef7bad6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fbc5bf5dc-8xmbr" podUID="bc2bfd40-07fc-45df-b493-7140e7f7d72c" Nov 6 00:19:55.017506 systemd[1]: run-netns-cni\x2d77d22710\x2da788\x2db7da\x2dac2c\x2d6a08ecb3ef3a.mount: Deactivated successfully. 
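Every RunPodSandbox failure in the block above has the same root cause: the Calico CNI plugin refuses to wire up a pod until it can read the nodename file that calico/node writes once it is running, and at this point the calico-node container has not started yet. A minimal sketch of that kind of readiness check (illustrative, not Calico's actual source):

    package main

    import (
        "fmt"
        "os"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    // calicoNodeName returns the node name written by calico/node, or an error
    // in the same spirit as the CNI failures above when the file is missing.
    func calicoNodeName() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        if err != nil {
            return "", err
        }
        return string(data), nil
    }

    func main() {
        if name, err := calicoNodeName(); err != nil {
            fmt.Println("CNI add/delete would fail:", err)
        } else {
            fmt.Println("node name:", name)
        }
    }

This is also consistent with the "cni plugin not initialized" / "network is not ready" messages for csi-node-driver-fsxfk earlier: the sandboxes cannot be created until the calico-node container started later in the log is up and has written this file.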
Nov 6 00:19:55.235994 kubelet[2686]: I1106 00:19:55.235772 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:19:55.238145 kubelet[2686]: E1106 00:19:55.237819 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:55.252868 kubelet[2686]: E1106 00:19:55.252801 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:19:59.957359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630776409.mount: Deactivated successfully. Nov 6 00:20:00.274186 containerd[1494]: time="2025-11-06T00:20:00.213734982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 6 00:20:00.299509 containerd[1494]: time="2025-11-06T00:20:00.298458340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:20:00.359764 containerd[1494]: time="2025-11-06T00:20:00.359654607Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:20:00.362166 containerd[1494]: time="2025-11-06T00:20:00.362083425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:20:00.363556 containerd[1494]: time="2025-11-06T00:20:00.363141058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.109689269s" Nov 6 00:20:00.363556 containerd[1494]: time="2025-11-06T00:20:00.363200357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 00:20:00.426777 containerd[1494]: time="2025-11-06T00:20:00.426658355Z" level=info msg="CreateContainer within sandbox \"550192ae9d9b31ca54e5025a7f3d9f97b7122e41a3dc4a46c054cb0385644eca\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 00:20:00.516596 containerd[1494]: time="2025-11-06T00:20:00.512184439Z" level=info msg="Container 073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:20:00.517135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632467186.mount: Deactivated successfully. 
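The recurring dns.go entries ("Nameserver limits exceeded") reflect kubelet's per-pod cap of three resolv.conf nameservers: extra entries are dropped, and the first three are kept as-is, which is why 67.207.67.2 appears twice in the applied line. A sketch of that truncation, with a hypothetical fourth host nameserver standing in for whatever was actually cut off:

    package main

    import (
        "fmt"
        "strings"
    )

    // maxDNSNameservers mirrors the kubelet limit behind the warnings above.
    const maxDNSNameservers = 3

    func applyNameserverLimit(nameservers []string) []string {
        if len(nameservers) <= maxDNSNameservers {
            return nameservers
        }
        applied := nameservers[:maxDNSNameservers]
        fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
            strings.Join(applied, " "))
        return applied
    }

    func main() {
        // Hypothetical host resolv.conf contents; only the first three survive,
        // duplicates included.
        applyNameserverLimit([]string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "8.8.8.8"})
    }
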
Nov 6 00:20:00.565744 containerd[1494]: time="2025-11-06T00:20:00.565026737Z" level=info msg="CreateContainer within sandbox \"550192ae9d9b31ca54e5025a7f3d9f97b7122e41a3dc4a46c054cb0385644eca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b\"" Nov 6 00:20:00.567774 containerd[1494]: time="2025-11-06T00:20:00.567712163Z" level=info msg="StartContainer for \"073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b\"" Nov 6 00:20:00.585027 containerd[1494]: time="2025-11-06T00:20:00.584954444Z" level=info msg="connecting to shim 073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b" address="unix:///run/containerd/s/1f5003c61f8782dec9c9b31b2725abe25022388c1776f15448f430e720179b7f" protocol=ttrpc version=3 Nov 6 00:20:00.715839 systemd[1]: Started cri-containerd-073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b.scope - libcontainer container 073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b. Nov 6 00:20:00.785836 containerd[1494]: time="2025-11-06T00:20:00.785082296Z" level=info msg="StartContainer for \"073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b\" returns successfully" Nov 6 00:20:00.943299 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 00:20:00.944886 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 6 00:20:01.306056 kubelet[2686]: I1106 00:20:01.303750 2686 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6nfl\" (UniqueName: \"kubernetes.io/projected/bc2bfd40-07fc-45df-b493-7140e7f7d72c-kube-api-access-t6nfl\") pod \"bc2bfd40-07fc-45df-b493-7140e7f7d72c\" (UID: \"bc2bfd40-07fc-45df-b493-7140e7f7d72c\") " Nov 6 00:20:01.306056 kubelet[2686]: I1106 00:20:01.303794 2686 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bc2bfd40-07fc-45df-b493-7140e7f7d72c-whisker-backend-key-pair\") pod \"bc2bfd40-07fc-45df-b493-7140e7f7d72c\" (UID: \"bc2bfd40-07fc-45df-b493-7140e7f7d72c\") " Nov 6 00:20:01.306056 kubelet[2686]: I1106 00:20:01.303843 2686 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc2bfd40-07fc-45df-b493-7140e7f7d72c-whisker-ca-bundle\") pod \"bc2bfd40-07fc-45df-b493-7140e7f7d72c\" (UID: \"bc2bfd40-07fc-45df-b493-7140e7f7d72c\") " Nov 6 00:20:01.306056 kubelet[2686]: I1106 00:20:01.304489 2686 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2bfd40-07fc-45df-b493-7140e7f7d72c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bc2bfd40-07fc-45df-b493-7140e7f7d72c" (UID: "bc2bfd40-07fc-45df-b493-7140e7f7d72c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:20:01.323877 systemd[1]: var-lib-kubelet-pods-bc2bfd40\x2d07fc\x2d45df\x2db493\x2d7140e7f7d72c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 6 00:20:01.327381 kubelet[2686]: I1106 00:20:01.326696 2686 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2bfd40-07fc-45df-b493-7140e7f7d72c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bc2bfd40-07fc-45df-b493-7140e7f7d72c" (UID: "bc2bfd40-07fc-45df-b493-7140e7f7d72c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:20:01.334274 kubelet[2686]: I1106 00:20:01.334211 2686 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2bfd40-07fc-45df-b493-7140e7f7d72c-kube-api-access-t6nfl" (OuterVolumeSpecName: "kube-api-access-t6nfl") pod "bc2bfd40-07fc-45df-b493-7140e7f7d72c" (UID: "bc2bfd40-07fc-45df-b493-7140e7f7d72c"). InnerVolumeSpecName "kube-api-access-t6nfl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:20:01.341354 systemd[1]: var-lib-kubelet-pods-bc2bfd40\x2d07fc\x2d45df\x2db493\x2d7140e7f7d72c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt6nfl.mount: Deactivated successfully. Nov 6 00:20:01.351307 kubelet[2686]: E1106 00:20:01.351251 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:01.375976 systemd[1]: Removed slice kubepods-besteffort-podbc2bfd40_07fc_45df_b493_7140e7f7d72c.slice - libcontainer container kubepods-besteffort-podbc2bfd40_07fc_45df_b493_7140e7f7d72c.slice. Nov 6 00:20:01.406572 kubelet[2686]: I1106 00:20:01.406246 2686 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc2bfd40-07fc-45df-b493-7140e7f7d72c-whisker-ca-bundle\") on node \"ci-4459.1.0-n-46450dc2d5\" DevicePath \"\"" Nov 6 00:20:01.406572 kubelet[2686]: I1106 00:20:01.406297 2686 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t6nfl\" (UniqueName: \"kubernetes.io/projected/bc2bfd40-07fc-45df-b493-7140e7f7d72c-kube-api-access-t6nfl\") on node \"ci-4459.1.0-n-46450dc2d5\" DevicePath \"\"" Nov 6 00:20:01.406572 kubelet[2686]: I1106 00:20:01.406313 2686 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bc2bfd40-07fc-45df-b493-7140e7f7d72c-whisker-backend-key-pair\") on node \"ci-4459.1.0-n-46450dc2d5\" DevicePath \"\"" Nov 6 00:20:01.416888 kubelet[2686]: I1106 00:20:01.406248 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ndvs9" podStartSLOduration=2.581681285 podStartE2EDuration="18.406222489s" podCreationTimestamp="2025-11-06 00:19:43 +0000 UTC" firstStartedPulling="2025-11-06 00:19:44.561616239 +0000 UTC m=+23.765475751" lastFinishedPulling="2025-11-06 00:20:00.386157426 +0000 UTC m=+39.590016955" observedRunningTime="2025-11-06 00:20:01.402202116 +0000 UTC m=+40.606061655" watchObservedRunningTime="2025-11-06 00:20:01.406222489 +0000 UTC m=+40.610082019" Nov 6 00:20:01.631715 systemd[1]: Created slice kubepods-besteffort-podcfa7aa38_883a_4fbf_a2b0_653ce5e79003.slice - libcontainer container kubepods-besteffort-podcfa7aa38_883a_4fbf_a2b0_653ce5e79003.slice. 
Nov 6 00:20:01.711360 kubelet[2686]: I1106 00:20:01.711298 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cfa7aa38-883a-4fbf-a2b0-653ce5e79003-whisker-backend-key-pair\") pod \"whisker-66d8bc69bc-fhr2k\" (UID: \"cfa7aa38-883a-4fbf-a2b0-653ce5e79003\") " pod="calico-system/whisker-66d8bc69bc-fhr2k" Nov 6 00:20:01.711737 kubelet[2686]: I1106 00:20:01.711686 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfa7aa38-883a-4fbf-a2b0-653ce5e79003-whisker-ca-bundle\") pod \"whisker-66d8bc69bc-fhr2k\" (UID: \"cfa7aa38-883a-4fbf-a2b0-653ce5e79003\") " pod="calico-system/whisker-66d8bc69bc-fhr2k" Nov 6 00:20:01.712017 kubelet[2686]: I1106 00:20:01.711969 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4tk5\" (UniqueName: \"kubernetes.io/projected/cfa7aa38-883a-4fbf-a2b0-653ce5e79003-kube-api-access-q4tk5\") pod \"whisker-66d8bc69bc-fhr2k\" (UID: \"cfa7aa38-883a-4fbf-a2b0-653ce5e79003\") " pod="calico-system/whisker-66d8bc69bc-fhr2k" Nov 6 00:20:01.953046 containerd[1494]: time="2025-11-06T00:20:01.952613714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d8bc69bc-fhr2k,Uid:cfa7aa38-883a-4fbf-a2b0-653ce5e79003,Namespace:calico-system,Attempt:0,}" Nov 6 00:20:02.360526 kubelet[2686]: I1106 00:20:02.359710 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:20:02.362004 kubelet[2686]: E1106 00:20:02.360611 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:02.658311 systemd-networkd[1422]: cali02602cd3230: Link UP Nov 6 00:20:02.658771 systemd-networkd[1422]: cali02602cd3230: Gained carrier Nov 6 00:20:02.691546 containerd[1494]: 2025-11-06 00:20:02.048 [INFO][3807] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:20:02.691546 containerd[1494]: 2025-11-06 00:20:02.151 [INFO][3807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0 whisker-66d8bc69bc- calico-system cfa7aa38-883a-4fbf-a2b0-653ce5e79003 909 0 2025-11-06 00:20:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66d8bc69bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.1.0-n-46450dc2d5 whisker-66d8bc69bc-fhr2k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali02602cd3230 [] [] }} ContainerID="0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" Namespace="calico-system" Pod="whisker-66d8bc69bc-fhr2k" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-" Nov 6 00:20:02.691546 containerd[1494]: 2025-11-06 00:20:02.151 [INFO][3807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" Namespace="calico-system" Pod="whisker-66d8bc69bc-fhr2k" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0" Nov 6 00:20:02.691546 containerd[1494]: 2025-11-06 00:20:02.511 [INFO][3818] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" HandleID="k8s-pod-network.0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" Workload="ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0" Nov 6 00:20:02.692392 containerd[1494]: 2025-11-06 00:20:02.515 [INFO][3818] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" HandleID="k8s-pod-network.0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" Workload="ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-46450dc2d5", "pod":"whisker-66d8bc69bc-fhr2k", "timestamp":"2025-11-06 00:20:02.511148829 +0000 UTC"}, Hostname:"ci-4459.1.0-n-46450dc2d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:20:02.692392 containerd[1494]: 2025-11-06 00:20:02.515 [INFO][3818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:20:02.692392 containerd[1494]: 2025-11-06 00:20:02.516 [INFO][3818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:20:02.692392 containerd[1494]: 2025-11-06 00:20:02.517 [INFO][3818] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-46450dc2d5' Nov 6 00:20:02.692392 containerd[1494]: 2025-11-06 00:20:02.560 [INFO][3818] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:02.692392 containerd[1494]: 2025-11-06 00:20:02.580 [INFO][3818] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:02.692392 containerd[1494]: 2025-11-06 00:20:02.594 [INFO][3818] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:02.692392 containerd[1494]: 2025-11-06 00:20:02.599 [INFO][3818] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:02.692392 containerd[1494]: 2025-11-06 00:20:02.605 [INFO][3818] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:02.692913 containerd[1494]: 2025-11-06 00:20:02.605 [INFO][3818] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:02.692913 containerd[1494]: 2025-11-06 00:20:02.609 [INFO][3818] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174 Nov 6 00:20:02.692913 containerd[1494]: 2025-11-06 00:20:02.617 [INFO][3818] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:02.692913 containerd[1494]: 2025-11-06 00:20:02.629 [INFO][3818] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.193/26] block=192.168.71.192/26 handle="k8s-pod-network.0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" 
host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:02.692913 containerd[1494]: 2025-11-06 00:20:02.629 [INFO][3818] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.193/26] handle="k8s-pod-network.0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:02.692913 containerd[1494]: 2025-11-06 00:20:02.629 [INFO][3818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:20:02.692913 containerd[1494]: 2025-11-06 00:20:02.629 [INFO][3818] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.193/26] IPv6=[] ContainerID="0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" HandleID="k8s-pod-network.0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" Workload="ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0" Nov 6 00:20:02.693222 containerd[1494]: 2025-11-06 00:20:02.633 [INFO][3807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" Namespace="calico-system" Pod="whisker-66d8bc69bc-fhr2k" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0", GenerateName:"whisker-66d8bc69bc-", Namespace:"calico-system", SelfLink:"", UID:"cfa7aa38-883a-4fbf-a2b0-653ce5e79003", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 20, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66d8bc69bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"", Pod:"whisker-66d8bc69bc-fhr2k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali02602cd3230", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:02.693222 containerd[1494]: 2025-11-06 00:20:02.634 [INFO][3807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.193/32] ContainerID="0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" Namespace="calico-system" Pod="whisker-66d8bc69bc-fhr2k" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0" Nov 6 00:20:02.693378 containerd[1494]: 2025-11-06 00:20:02.634 [INFO][3807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02602cd3230 ContainerID="0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" Namespace="calico-system" Pod="whisker-66d8bc69bc-fhr2k" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0" Nov 6 00:20:02.693378 containerd[1494]: 2025-11-06 00:20:02.650 [INFO][3807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" Namespace="calico-system" Pod="whisker-66d8bc69bc-fhr2k" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0" Nov 6 00:20:02.693516 containerd[1494]: 2025-11-06 00:20:02.651 [INFO][3807] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" Namespace="calico-system" Pod="whisker-66d8bc69bc-fhr2k" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0", GenerateName:"whisker-66d8bc69bc-", Namespace:"calico-system", SelfLink:"", UID:"cfa7aa38-883a-4fbf-a2b0-653ce5e79003", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 20, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66d8bc69bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174", Pod:"whisker-66d8bc69bc-fhr2k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali02602cd3230", MAC:"ea:51:83:78:b1:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:02.694130 containerd[1494]: 2025-11-06 00:20:02.673 [INFO][3807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" Namespace="calico-system" Pod="whisker-66d8bc69bc-fhr2k" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-whisker--66d8bc69bc--fhr2k-eth0" Nov 6 00:20:02.875519 containerd[1494]: time="2025-11-06T00:20:02.875379423Z" level=info msg="connecting to shim 0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174" address="unix:///run/containerd/s/6ae48afdaa14c251062da09a7e77c4b471e357c65be68985c98832bb4b286894" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:20:02.939072 systemd[1]: Started cri-containerd-0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174.scope - libcontainer container 0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174. 
Nov 6 00:20:03.012503 kubelet[2686]: I1106 00:20:03.011852 2686 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2bfd40-07fc-45df-b493-7140e7f7d72c" path="/var/lib/kubelet/pods/bc2bfd40-07fc-45df-b493-7140e7f7d72c/volumes" Nov 6 00:20:03.164719 containerd[1494]: time="2025-11-06T00:20:03.164655816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d8bc69bc-fhr2k,Uid:cfa7aa38-883a-4fbf-a2b0-653ce5e79003,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b031adc531497cf527bc7cb5871ee1e7d969415ef0c414a602eebed1c3ae174\"" Nov 6 00:20:03.207103 containerd[1494]: time="2025-11-06T00:20:03.206952885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:20:03.587365 containerd[1494]: time="2025-11-06T00:20:03.587182077Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:03.595577 containerd[1494]: time="2025-11-06T00:20:03.589002088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:20:03.595577 containerd[1494]: time="2025-11-06T00:20:03.589009670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:20:03.597061 kubelet[2686]: E1106 00:20:03.596151 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:20:03.597061 kubelet[2686]: E1106 00:20:03.596212 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:20:03.597061 kubelet[2686]: E1106 00:20:03.596318 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66d8bc69bc-fhr2k_calico-system(cfa7aa38-883a-4fbf-a2b0-653ce5e79003): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:03.603179 containerd[1494]: time="2025-11-06T00:20:03.602635406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:20:03.628141 kubelet[2686]: I1106 00:20:03.628083 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:20:03.629383 kubelet[2686]: E1106 00:20:03.629311 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:03.979233 containerd[1494]: time="2025-11-06T00:20:03.979022983Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:03.993190 containerd[1494]: time="2025-11-06T00:20:03.993085991Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:20:03.993395 containerd[1494]: time="2025-11-06T00:20:03.993143259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:20:03.993592 kubelet[2686]: E1106 00:20:03.993505 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:20:03.993592 kubelet[2686]: E1106 00:20:03.993561 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:20:03.993735 kubelet[2686]: E1106 00:20:03.993651 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66d8bc69bc-fhr2k_calico-system(cfa7aa38-883a-4fbf-a2b0-653ce5e79003): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:03.993735 kubelet[2686]: E1106 00:20:03.993700 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d8bc69bc-fhr2k" podUID="cfa7aa38-883a-4fbf-a2b0-653ce5e79003" Nov 6 00:20:04.072454 containerd[1494]: time="2025-11-06T00:20:04.072390330Z" level=info msg="TaskExit event in podsandbox handler container_id:\"073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b\" id:\"64e5f68d437b88232e674c1333c98df093d52f3f76e39b17cde0e143a40016bd\" pid:3992 exit_status:1 exited_at:{seconds:1762388404 nanos:59394480}" Nov 6 00:20:04.234885 containerd[1494]: time="2025-11-06T00:20:04.234288379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b\" id:\"45e812ef2bc845e13aebaefea1c5496ff375ba0b1014b949e9541daf746c000c\" pid:4035 exit_status:1 exited_at:{seconds:1762388404 nanos:233551548}" Nov 6 00:20:04.384433 kubelet[2686]: E1106 00:20:04.384381 2686 
pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d8bc69bc-fhr2k" podUID="cfa7aa38-883a-4fbf-a2b0-653ce5e79003" Nov 6 00:20:04.510450 systemd-networkd[1422]: cali02602cd3230: Gained IPv6LL Nov 6 00:20:04.532601 systemd-networkd[1422]: vxlan.calico: Link UP Nov 6 00:20:04.532631 systemd-networkd[1422]: vxlan.calico: Gained carrier Nov 6 00:20:04.985046 containerd[1494]: time="2025-11-06T00:20:04.984550798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5765b6cb-hzncn,Uid:22d06117-b04a-43e9-87e1-aa14b7fcef4f,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:20:04.988841 containerd[1494]: time="2025-11-06T00:20:04.988788692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-fx2xl,Uid:d092cd15-7a3f-47f6-bde9-78a01defdd36,Namespace:calico-system,Attempt:0,}" Nov 6 00:20:05.329004 systemd-networkd[1422]: calibe144ee7f7b: Link UP Nov 6 00:20:05.331327 systemd-networkd[1422]: calibe144ee7f7b: Gained carrier Nov 6 00:20:05.359606 containerd[1494]: 2025-11-06 00:20:05.120 [INFO][4125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0 calico-apiserver-5765b6cb- calico-apiserver 22d06117-b04a-43e9-87e1-aa14b7fcef4f 840 0 2025-11-06 00:19:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5765b6cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-46450dc2d5 calico-apiserver-5765b6cb-hzncn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe144ee7f7b [] [] }} ContainerID="05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-hzncn" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-" Nov 6 00:20:05.359606 containerd[1494]: 2025-11-06 00:20:05.122 [INFO][4125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-hzncn" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0" Nov 6 00:20:05.359606 containerd[1494]: 2025-11-06 00:20:05.242 [INFO][4145] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" 
HandleID="k8s-pod-network.05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" Workload="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0" Nov 6 00:20:05.361551 containerd[1494]: 2025-11-06 00:20:05.242 [INFO][4145] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" HandleID="k8s-pod-network.05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" Workload="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac3f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-46450dc2d5", "pod":"calico-apiserver-5765b6cb-hzncn", "timestamp":"2025-11-06 00:20:05.242167312 +0000 UTC"}, Hostname:"ci-4459.1.0-n-46450dc2d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:20:05.361551 containerd[1494]: 2025-11-06 00:20:05.242 [INFO][4145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:20:05.361551 containerd[1494]: 2025-11-06 00:20:05.242 [INFO][4145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:20:05.361551 containerd[1494]: 2025-11-06 00:20:05.242 [INFO][4145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-46450dc2d5' Nov 6 00:20:05.361551 containerd[1494]: 2025-11-06 00:20:05.255 [INFO][4145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.361551 containerd[1494]: 2025-11-06 00:20:05.270 [INFO][4145] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.361551 containerd[1494]: 2025-11-06 00:20:05.282 [INFO][4145] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.361551 containerd[1494]: 2025-11-06 00:20:05.286 [INFO][4145] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.361551 containerd[1494]: 2025-11-06 00:20:05.290 [INFO][4145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.361931 containerd[1494]: 2025-11-06 00:20:05.290 [INFO][4145] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.361931 containerd[1494]: 2025-11-06 00:20:05.293 [INFO][4145] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665 Nov 6 00:20:05.361931 containerd[1494]: 2025-11-06 00:20:05.299 [INFO][4145] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.361931 containerd[1494]: 2025-11-06 00:20:05.308 [INFO][4145] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.194/26] block=192.168.71.192/26 handle="k8s-pod-network.05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.361931 containerd[1494]: 2025-11-06 
00:20:05.308 [INFO][4145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.194/26] handle="k8s-pod-network.05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.361931 containerd[1494]: 2025-11-06 00:20:05.308 [INFO][4145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:20:05.361931 containerd[1494]: 2025-11-06 00:20:05.308 [INFO][4145] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.194/26] IPv6=[] ContainerID="05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" HandleID="k8s-pod-network.05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" Workload="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0" Nov 6 00:20:05.363764 containerd[1494]: 2025-11-06 00:20:05.316 [INFO][4125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-hzncn" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0", GenerateName:"calico-apiserver-5765b6cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"22d06117-b04a-43e9-87e1-aa14b7fcef4f", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5765b6cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"", Pod:"calico-apiserver-5765b6cb-hzncn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe144ee7f7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:05.363960 containerd[1494]: 2025-11-06 00:20:05.317 [INFO][4125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.194/32] ContainerID="05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-hzncn" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0" Nov 6 00:20:05.363960 containerd[1494]: 2025-11-06 00:20:05.317 [INFO][4125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe144ee7f7b ContainerID="05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-hzncn" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0" Nov 6 00:20:05.363960 containerd[1494]: 2025-11-06 00:20:05.332 [INFO][4125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-hzncn" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0" Nov 6 00:20:05.364051 containerd[1494]: 2025-11-06 00:20:05.335 [INFO][4125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-hzncn" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0", GenerateName:"calico-apiserver-5765b6cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"22d06117-b04a-43e9-87e1-aa14b7fcef4f", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5765b6cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665", Pod:"calico-apiserver-5765b6cb-hzncn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe144ee7f7b", MAC:"e2:24:4b:b9:ab:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:05.364115 containerd[1494]: 2025-11-06 00:20:05.351 [INFO][4125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-hzncn" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--hzncn-eth0" Nov 6 00:20:05.423065 containerd[1494]: time="2025-11-06T00:20:05.421838326Z" level=info msg="connecting to shim 05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665" address="unix:///run/containerd/s/c52724d5c261551e4ff323daa52e43b5c466f257a0ce0445b2166f1ea8f3107e" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:20:05.456965 systemd-networkd[1422]: calib9b22fedbcd: Link UP Nov 6 00:20:05.459515 systemd-networkd[1422]: calib9b22fedbcd: Gained carrier Nov 6 00:20:05.498092 containerd[1494]: 2025-11-06 00:20:05.134 [INFO][4131] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0 goldmane-7c778bb748- calico-system d092cd15-7a3f-47f6-bde9-78a01defdd36 835 0 2025-11-06 00:19:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.1.0-n-46450dc2d5 goldmane-7c778bb748-fx2xl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib9b22fedbcd [] [] }} ContainerID="e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" Namespace="calico-system" Pod="goldmane-7c778bb748-fx2xl" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-" Nov 6 00:20:05.498092 containerd[1494]: 2025-11-06 00:20:05.134 [INFO][4131] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" Namespace="calico-system" Pod="goldmane-7c778bb748-fx2xl" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0" Nov 6 00:20:05.498092 containerd[1494]: 2025-11-06 00:20:05.264 [INFO][4150] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" HandleID="k8s-pod-network.e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" Workload="ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0" Nov 6 00:20:05.498666 containerd[1494]: 2025-11-06 00:20:05.264 [INFO][4150] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" HandleID="k8s-pod-network.e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" Workload="ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000354f20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-46450dc2d5", "pod":"goldmane-7c778bb748-fx2xl", "timestamp":"2025-11-06 00:20:05.264082229 +0000 UTC"}, Hostname:"ci-4459.1.0-n-46450dc2d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:20:05.498666 containerd[1494]: 2025-11-06 00:20:05.264 [INFO][4150] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:20:05.498666 containerd[1494]: 2025-11-06 00:20:05.308 [INFO][4150] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:20:05.498666 containerd[1494]: 2025-11-06 00:20:05.309 [INFO][4150] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-46450dc2d5' Nov 6 00:20:05.498666 containerd[1494]: 2025-11-06 00:20:05.356 [INFO][4150] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.498666 containerd[1494]: 2025-11-06 00:20:05.371 [INFO][4150] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.498666 containerd[1494]: 2025-11-06 00:20:05.388 [INFO][4150] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.498666 containerd[1494]: 2025-11-06 00:20:05.403 [INFO][4150] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.498666 containerd[1494]: 2025-11-06 00:20:05.408 [INFO][4150] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.498913 containerd[1494]: 2025-11-06 00:20:05.408 [INFO][4150] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.498913 containerd[1494]: 2025-11-06 00:20:05.415 [INFO][4150] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999 Nov 6 00:20:05.498913 containerd[1494]: 2025-11-06 00:20:05.425 [INFO][4150] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.498913 containerd[1494]: 2025-11-06 00:20:05.440 [INFO][4150] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.195/26] block=192.168.71.192/26 handle="k8s-pod-network.e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.498913 containerd[1494]: 2025-11-06 00:20:05.441 [INFO][4150] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.195/26] handle="k8s-pod-network.e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:05.498913 containerd[1494]: 2025-11-06 00:20:05.441 [INFO][4150] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:20:05.498913 containerd[1494]: 2025-11-06 00:20:05.441 [INFO][4150] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.195/26] IPv6=[] ContainerID="e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" HandleID="k8s-pod-network.e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" Workload="ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0" Nov 6 00:20:05.499461 containerd[1494]: 2025-11-06 00:20:05.450 [INFO][4131] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" Namespace="calico-system" Pod="goldmane-7c778bb748-fx2xl" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d092cd15-7a3f-47f6-bde9-78a01defdd36", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"", Pod:"goldmane-7c778bb748-fx2xl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9b22fedbcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:05.499606 containerd[1494]: 2025-11-06 00:20:05.451 [INFO][4131] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.195/32] ContainerID="e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" Namespace="calico-system" Pod="goldmane-7c778bb748-fx2xl" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0" Nov 6 00:20:05.499606 containerd[1494]: 2025-11-06 00:20:05.451 [INFO][4131] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9b22fedbcd ContainerID="e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" Namespace="calico-system" Pod="goldmane-7c778bb748-fx2xl" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0" Nov 6 00:20:05.499606 containerd[1494]: 2025-11-06 00:20:05.459 [INFO][4131] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" Namespace="calico-system" Pod="goldmane-7c778bb748-fx2xl" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0" Nov 6 00:20:05.499721 containerd[1494]: 2025-11-06 00:20:05.461 [INFO][4131] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" 
Namespace="calico-system" Pod="goldmane-7c778bb748-fx2xl" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d092cd15-7a3f-47f6-bde9-78a01defdd36", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999", Pod:"goldmane-7c778bb748-fx2xl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9b22fedbcd", MAC:"fa:f8:84:45:83:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:05.499796 containerd[1494]: 2025-11-06 00:20:05.487 [INFO][4131] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" Namespace="calico-system" Pod="goldmane-7c778bb748-fx2xl" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-goldmane--7c778bb748--fx2xl-eth0" Nov 6 00:20:05.515133 systemd[1]: Started cri-containerd-05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665.scope - libcontainer container 05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665. Nov 6 00:20:05.563656 containerd[1494]: time="2025-11-06T00:20:05.563592408Z" level=info msg="connecting to shim e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999" address="unix:///run/containerd/s/e5655b971e861a8b815ba2db3324c0cf1e0e754f81fb7689ef790d1787480723" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:20:05.591099 systemd-networkd[1422]: vxlan.calico: Gained IPv6LL Nov 6 00:20:05.640949 systemd[1]: Started cri-containerd-e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999.scope - libcontainer container e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999. 
Nov 6 00:20:05.646492 containerd[1494]: time="2025-11-06T00:20:05.646219845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5765b6cb-hzncn,Uid:22d06117-b04a-43e9-87e1-aa14b7fcef4f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"05136469dda2f6fc819e244cff2068f15a7f0a554a79d1774d89724a3d222665\"" Nov 6 00:20:05.651660 containerd[1494]: time="2025-11-06T00:20:05.651514906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:20:05.711549 containerd[1494]: time="2025-11-06T00:20:05.711421336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-fx2xl,Uid:d092cd15-7a3f-47f6-bde9-78a01defdd36,Namespace:calico-system,Attempt:0,} returns sandbox id \"e149a9f739deaaf38d8645ec48986f16c9c5e73763de2f52d95e583bfd120999\"" Nov 6 00:20:05.980279 containerd[1494]: time="2025-11-06T00:20:05.980198778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fsxfk,Uid:14475adc-4ac3-4f9b-9293-bb510ff52d31,Namespace:calico-system,Attempt:0,}" Nov 6 00:20:05.982098 containerd[1494]: time="2025-11-06T00:20:05.982031853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8495cfffbb-59fst,Uid:fa727a0e-c8e3-4851-8c43-fa33e679ce52,Namespace:calico-system,Attempt:0,}" Nov 6 00:20:05.982988 kubelet[2686]: E1106 00:20:05.982948 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:05.984654 containerd[1494]: time="2025-11-06T00:20:05.984522234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nf9q8,Uid:a63c9258-20b6-4f02-98b1-7ffadf516e5e,Namespace:kube-system,Attempt:0,}" Nov 6 00:20:06.011053 containerd[1494]: time="2025-11-06T00:20:06.010932729Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:06.015178 containerd[1494]: time="2025-11-06T00:20:06.015075683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:20:06.017025 containerd[1494]: time="2025-11-06T00:20:06.016091590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:20:06.017789 kubelet[2686]: E1106 00:20:06.017684 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:06.017789 kubelet[2686]: E1106 00:20:06.017753 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:06.019855 kubelet[2686]: E1106 00:20:06.018006 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-5765b6cb-hzncn_calico-apiserver(22d06117-b04a-43e9-87e1-aa14b7fcef4f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:06.020835 kubelet[2686]: E1106 00:20:06.020720 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" podUID="22d06117-b04a-43e9-87e1-aa14b7fcef4f" Nov 6 00:20:06.033167 containerd[1494]: time="2025-11-06T00:20:06.031947568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:20:06.312867 systemd-networkd[1422]: cali260681c225b: Link UP Nov 6 00:20:06.315350 systemd-networkd[1422]: cali260681c225b: Gained carrier Nov 6 00:20:06.350303 containerd[1494]: 2025-11-06 00:20:06.110 [INFO][4272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0 calico-kube-controllers-8495cfffbb- calico-system fa727a0e-c8e3-4851-8c43-fa33e679ce52 838 0 2025-11-06 00:19:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8495cfffbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.1.0-n-46450dc2d5 calico-kube-controllers-8495cfffbb-59fst eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali260681c225b [] [] }} ContainerID="e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" Namespace="calico-system" Pod="calico-kube-controllers-8495cfffbb-59fst" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-" Nov 6 00:20:06.350303 containerd[1494]: 2025-11-06 00:20:06.111 [INFO][4272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" Namespace="calico-system" Pod="calico-kube-controllers-8495cfffbb-59fst" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0" Nov 6 00:20:06.350303 containerd[1494]: 2025-11-06 00:20:06.209 [INFO][4304] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" HandleID="k8s-pod-network.e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" Workload="ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0" Nov 6 00:20:06.350705 containerd[1494]: 2025-11-06 00:20:06.210 [INFO][4304] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" HandleID="k8s-pod-network.e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" Workload="ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000103ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-46450dc2d5", "pod":"calico-kube-controllers-8495cfffbb-59fst", "timestamp":"2025-11-06 00:20:06.209012735 +0000 UTC"}, Hostname:"ci-4459.1.0-n-46450dc2d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:20:06.350705 containerd[1494]: 2025-11-06 00:20:06.212 [INFO][4304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:20:06.350705 containerd[1494]: 2025-11-06 00:20:06.213 [INFO][4304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:20:06.350705 containerd[1494]: 2025-11-06 00:20:06.213 [INFO][4304] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-46450dc2d5' Nov 6 00:20:06.350705 containerd[1494]: 2025-11-06 00:20:06.238 [INFO][4304] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.350705 containerd[1494]: 2025-11-06 00:20:06.250 [INFO][4304] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.350705 containerd[1494]: 2025-11-06 00:20:06.263 [INFO][4304] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.350705 containerd[1494]: 2025-11-06 00:20:06.271 [INFO][4304] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.350705 containerd[1494]: 2025-11-06 00:20:06.275 [INFO][4304] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.351132 containerd[1494]: 2025-11-06 00:20:06.275 [INFO][4304] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.351132 containerd[1494]: 2025-11-06 00:20:06.279 [INFO][4304] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e Nov 6 00:20:06.351132 containerd[1494]: 2025-11-06 00:20:06.286 [INFO][4304] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.351132 containerd[1494]: 2025-11-06 00:20:06.296 [INFO][4304] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.196/26] block=192.168.71.192/26 handle="k8s-pod-network.e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.351132 containerd[1494]: 2025-11-06 00:20:06.296 [INFO][4304] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.196/26] handle="k8s-pod-network.e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.351132 containerd[1494]: 2025-11-06 00:20:06.296 [INFO][4304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:20:06.351132 containerd[1494]: 2025-11-06 00:20:06.297 [INFO][4304] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.196/26] IPv6=[] ContainerID="e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" HandleID="k8s-pod-network.e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" Workload="ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0" Nov 6 00:20:06.352629 containerd[1494]: 2025-11-06 00:20:06.303 [INFO][4272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" Namespace="calico-system" Pod="calico-kube-controllers-8495cfffbb-59fst" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0", GenerateName:"calico-kube-controllers-8495cfffbb-", Namespace:"calico-system", SelfLink:"", UID:"fa727a0e-c8e3-4851-8c43-fa33e679ce52", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8495cfffbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"", Pod:"calico-kube-controllers-8495cfffbb-59fst", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali260681c225b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:06.352736 containerd[1494]: 2025-11-06 00:20:06.303 [INFO][4272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.196/32] ContainerID="e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" Namespace="calico-system" Pod="calico-kube-controllers-8495cfffbb-59fst" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0" Nov 6 00:20:06.352736 containerd[1494]: 2025-11-06 00:20:06.304 [INFO][4272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali260681c225b ContainerID="e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" Namespace="calico-system" Pod="calico-kube-controllers-8495cfffbb-59fst" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0" Nov 6 00:20:06.352736 containerd[1494]: 2025-11-06 00:20:06.316 [INFO][4272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" Namespace="calico-system" Pod="calico-kube-controllers-8495cfffbb-59fst" 
WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0" Nov 6 00:20:06.352813 containerd[1494]: 2025-11-06 00:20:06.317 [INFO][4272] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" Namespace="calico-system" Pod="calico-kube-controllers-8495cfffbb-59fst" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0", GenerateName:"calico-kube-controllers-8495cfffbb-", Namespace:"calico-system", SelfLink:"", UID:"fa727a0e-c8e3-4851-8c43-fa33e679ce52", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8495cfffbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e", Pod:"calico-kube-controllers-8495cfffbb-59fst", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali260681c225b", MAC:"f6:e6:89:97:16:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:06.352883 containerd[1494]: 2025-11-06 00:20:06.337 [INFO][4272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" Namespace="calico-system" Pod="calico-kube-controllers-8495cfffbb-59fst" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--kube--controllers--8495cfffbb--59fst-eth0" Nov 6 00:20:06.392295 containerd[1494]: time="2025-11-06T00:20:06.392229301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:06.403030 containerd[1494]: time="2025-11-06T00:20:06.401536667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:20:06.403030 containerd[1494]: time="2025-11-06T00:20:06.401755207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:20:06.403258 kubelet[2686]: E1106 00:20:06.402017 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:20:06.403258 kubelet[2686]: E1106 00:20:06.402446 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:20:06.404459 kubelet[2686]: E1106 00:20:06.403447 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-fx2xl_calico-system(d092cd15-7a3f-47f6-bde9-78a01defdd36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:06.404459 kubelet[2686]: E1106 00:20:06.403830 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-fx2xl" podUID="d092cd15-7a3f-47f6-bde9-78a01defdd36" Nov 6 00:20:06.416860 kubelet[2686]: E1106 00:20:06.416768 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-fx2xl" podUID="d092cd15-7a3f-47f6-bde9-78a01defdd36" Nov 6 00:20:06.420041 containerd[1494]: time="2025-11-06T00:20:06.419917150Z" level=info msg="connecting to shim e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e" address="unix:///run/containerd/s/1e5e95d64f2258832e8da4b0a91ce45a71c83151d686f9a0ce0ddcf781b24bec" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:20:06.427022 kubelet[2686]: E1106 00:20:06.426717 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" podUID="22d06117-b04a-43e9-87e1-aa14b7fcef4f" Nov 6 00:20:06.487157 systemd-networkd[1422]: calib9b22fedbcd: Gained IPv6LL Nov 6 00:20:06.517850 systemd-networkd[1422]: caliaa86b3c552c: Link UP Nov 6 00:20:06.529790 systemd-networkd[1422]: caliaa86b3c552c: Gained carrier Nov 6 00:20:06.548765 systemd[1]: Started cri-containerd-e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e.scope - libcontainer container 
e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e. Nov 6 00:20:06.551447 systemd-networkd[1422]: calibe144ee7f7b: Gained IPv6LL Nov 6 00:20:06.580057 containerd[1494]: 2025-11-06 00:20:06.182 [INFO][4281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0 coredns-66bc5c9577- kube-system a63c9258-20b6-4f02-98b1-7ffadf516e5e 836 0 2025-11-06 00:19:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-46450dc2d5 coredns-66bc5c9577-nf9q8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa86b3c552c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" Namespace="kube-system" Pod="coredns-66bc5c9577-nf9q8" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-" Nov 6 00:20:06.580057 containerd[1494]: 2025-11-06 00:20:06.185 [INFO][4281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" Namespace="kube-system" Pod="coredns-66bc5c9577-nf9q8" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0" Nov 6 00:20:06.580057 containerd[1494]: 2025-11-06 00:20:06.271 [INFO][4316] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" HandleID="k8s-pod-network.1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" Workload="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0" Nov 6 00:20:06.580386 containerd[1494]: 2025-11-06 00:20:06.273 [INFO][4316] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" HandleID="k8s-pod-network.1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" Workload="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123a50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-46450dc2d5", "pod":"coredns-66bc5c9577-nf9q8", "timestamp":"2025-11-06 00:20:06.271779868 +0000 UTC"}, Hostname:"ci-4459.1.0-n-46450dc2d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:20:06.580386 containerd[1494]: 2025-11-06 00:20:06.273 [INFO][4316] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:20:06.580386 containerd[1494]: 2025-11-06 00:20:06.297 [INFO][4316] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:20:06.580386 containerd[1494]: 2025-11-06 00:20:06.297 [INFO][4316] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-46450dc2d5' Nov 6 00:20:06.580386 containerd[1494]: 2025-11-06 00:20:06.336 [INFO][4316] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.580386 containerd[1494]: 2025-11-06 00:20:06.356 [INFO][4316] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.580386 containerd[1494]: 2025-11-06 00:20:06.368 [INFO][4316] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.580386 containerd[1494]: 2025-11-06 00:20:06.372 [INFO][4316] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.580386 containerd[1494]: 2025-11-06 00:20:06.379 [INFO][4316] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.582221 containerd[1494]: 2025-11-06 00:20:06.379 [INFO][4316] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.582221 containerd[1494]: 2025-11-06 00:20:06.383 [INFO][4316] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da Nov 6 00:20:06.582221 containerd[1494]: 2025-11-06 00:20:06.399 [INFO][4316] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.582221 containerd[1494]: 2025-11-06 00:20:06.422 [INFO][4316] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.197/26] block=192.168.71.192/26 handle="k8s-pod-network.1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.582221 containerd[1494]: 2025-11-06 00:20:06.423 [INFO][4316] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.197/26] handle="k8s-pod-network.1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.582221 containerd[1494]: 2025-11-06 00:20:06.425 [INFO][4316] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:20:06.582221 containerd[1494]: 2025-11-06 00:20:06.427 [INFO][4316] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.197/26] IPv6=[] ContainerID="1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" HandleID="k8s-pod-network.1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" Workload="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0" Nov 6 00:20:06.584515 containerd[1494]: 2025-11-06 00:20:06.448 [INFO][4281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" Namespace="kube-system" Pod="coredns-66bc5c9577-nf9q8" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a63c9258-20b6-4f02-98b1-7ffadf516e5e", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"", Pod:"coredns-66bc5c9577-nf9q8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa86b3c552c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:06.584515 containerd[1494]: 2025-11-06 00:20:06.448 [INFO][4281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.197/32] ContainerID="1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" Namespace="kube-system" Pod="coredns-66bc5c9577-nf9q8" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0" Nov 6 00:20:06.584515 containerd[1494]: 2025-11-06 00:20:06.448 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa86b3c552c ContainerID="1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" Namespace="kube-system" Pod="coredns-66bc5c9577-nf9q8" 
WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0" Nov 6 00:20:06.584515 containerd[1494]: 2025-11-06 00:20:06.536 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" Namespace="kube-system" Pod="coredns-66bc5c9577-nf9q8" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0" Nov 6 00:20:06.584515 containerd[1494]: 2025-11-06 00:20:06.541 [INFO][4281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" Namespace="kube-system" Pod="coredns-66bc5c9577-nf9q8" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a63c9258-20b6-4f02-98b1-7ffadf516e5e", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da", Pod:"coredns-66bc5c9577-nf9q8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa86b3c552c", MAC:"4a:b9:c7:cf:0f:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:06.585532 containerd[1494]: 2025-11-06 00:20:06.570 [INFO][4281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" Namespace="kube-system" Pod="coredns-66bc5c9577-nf9q8" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--nf9q8-eth0" Nov 6 00:20:06.658356 containerd[1494]: time="2025-11-06T00:20:06.658152087Z" level=info msg="connecting to shim 1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da" 
address="unix:///run/containerd/s/12c31b5c0f2e2b1d072985c3be3cd011e93331a56312f379f86365019fe74d0c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:20:06.708730 systemd-networkd[1422]: cali94711693eba: Link UP Nov 6 00:20:06.710416 systemd-networkd[1422]: cali94711693eba: Gained carrier Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.175 [INFO][4270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0 csi-node-driver- calico-system 14475adc-4ac3-4f9b-9293-bb510ff52d31 722 0 2025-11-06 00:19:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.1.0-n-46450dc2d5 csi-node-driver-fsxfk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali94711693eba [] [] }} ContainerID="872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" Namespace="calico-system" Pod="csi-node-driver-fsxfk" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.179 [INFO][4270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" Namespace="calico-system" Pod="csi-node-driver-fsxfk" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.282 [INFO][4314] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" HandleID="k8s-pod-network.872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" Workload="ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.282 [INFO][4314] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" HandleID="k8s-pod-network.872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" Workload="ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103920), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-46450dc2d5", "pod":"csi-node-driver-fsxfk", "timestamp":"2025-11-06 00:20:06.282079614 +0000 UTC"}, Hostname:"ci-4459.1.0-n-46450dc2d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.282 [INFO][4314] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.425 [INFO][4314] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.427 [INFO][4314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-46450dc2d5' Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.577 [INFO][4314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.594 [INFO][4314] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.606 [INFO][4314] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.613 [INFO][4314] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.631 [INFO][4314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.631 [INFO][4314] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.640 [INFO][4314] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923 Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.655 [INFO][4314] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.674 [INFO][4314] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.198/26] block=192.168.71.192/26 handle="k8s-pod-network.872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.676 [INFO][4314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.198/26] handle="k8s-pod-network.872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.676 [INFO][4314] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:20:06.764729 containerd[1494]: 2025-11-06 00:20:06.676 [INFO][4314] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.198/26] IPv6=[] ContainerID="872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" HandleID="k8s-pod-network.872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" Workload="ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0" Nov 6 00:20:06.769067 containerd[1494]: 2025-11-06 00:20:06.698 [INFO][4270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" Namespace="calico-system" Pod="csi-node-driver-fsxfk" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"14475adc-4ac3-4f9b-9293-bb510ff52d31", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"", Pod:"csi-node-driver-fsxfk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94711693eba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:06.769067 containerd[1494]: 2025-11-06 00:20:06.698 [INFO][4270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.198/32] ContainerID="872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" Namespace="calico-system" Pod="csi-node-driver-fsxfk" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0" Nov 6 00:20:06.769067 containerd[1494]: 2025-11-06 00:20:06.698 [INFO][4270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94711693eba ContainerID="872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" Namespace="calico-system" Pod="csi-node-driver-fsxfk" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0" Nov 6 00:20:06.769067 containerd[1494]: 2025-11-06 00:20:06.711 [INFO][4270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" Namespace="calico-system" Pod="csi-node-driver-fsxfk" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0" Nov 6 00:20:06.769067 containerd[1494]: 2025-11-06 00:20:06.711 [INFO][4270] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" Namespace="calico-system" Pod="csi-node-driver-fsxfk" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"14475adc-4ac3-4f9b-9293-bb510ff52d31", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923", Pod:"csi-node-driver-fsxfk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94711693eba", MAC:"36:4d:fa:38:54:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:06.769067 containerd[1494]: 2025-11-06 00:20:06.749 [INFO][4270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" Namespace="calico-system" Pod="csi-node-driver-fsxfk" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-csi--node--driver--fsxfk-eth0" Nov 6 00:20:06.808794 systemd[1]: Started cri-containerd-1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da.scope - libcontainer container 1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da. Nov 6 00:20:06.830314 containerd[1494]: time="2025-11-06T00:20:06.830142654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8495cfffbb-59fst,Uid:fa727a0e-c8e3-4851-8c43-fa33e679ce52,Namespace:calico-system,Attempt:0,} returns sandbox id \"e196c56be1fd4cd81e11e0a76bd9645952f3ad82747d870a2561552719a47a4e\"" Nov 6 00:20:06.835333 containerd[1494]: time="2025-11-06T00:20:06.835260682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:20:06.852735 containerd[1494]: time="2025-11-06T00:20:06.852659976Z" level=info msg="connecting to shim 872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923" address="unix:///run/containerd/s/dc74b7b8d3b6d10db5152c6a905aafe3e5fc6a9e2a01826c8b35f603f3bd05d0" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:20:06.903963 systemd[1]: Started cri-containerd-872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923.scope - libcontainer container 872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923. 
Nov 6 00:20:06.936258 containerd[1494]: time="2025-11-06T00:20:06.936180842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nf9q8,Uid:a63c9258-20b6-4f02-98b1-7ffadf516e5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da\"" Nov 6 00:20:06.938117 kubelet[2686]: E1106 00:20:06.938057 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:06.945326 containerd[1494]: time="2025-11-06T00:20:06.945269378Z" level=info msg="CreateContainer within sandbox \"1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:20:06.961059 containerd[1494]: time="2025-11-06T00:20:06.960973755Z" level=info msg="Container 19bbe9f72baaff7b8caa270e2e79b833c5d533524fb816f8d151d6c020606bbf: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:20:06.969093 containerd[1494]: time="2025-11-06T00:20:06.969046768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fsxfk,Uid:14475adc-4ac3-4f9b-9293-bb510ff52d31,Namespace:calico-system,Attempt:0,} returns sandbox id \"872f9da965b767e7d95e5295aae32217fb64f95d1cc6cafd30b8909495b16923\"" Nov 6 00:20:06.969937 containerd[1494]: time="2025-11-06T00:20:06.969884946Z" level=info msg="CreateContainer within sandbox \"1a7294c081963c4324f4d32d72e940197391738a5013742f21da683fdb7b97da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19bbe9f72baaff7b8caa270e2e79b833c5d533524fb816f8d151d6c020606bbf\"" Nov 6 00:20:06.971455 containerd[1494]: time="2025-11-06T00:20:06.971068181Z" level=info msg="StartContainer for \"19bbe9f72baaff7b8caa270e2e79b833c5d533524fb816f8d151d6c020606bbf\"" Nov 6 00:20:06.975277 containerd[1494]: time="2025-11-06T00:20:06.974453535Z" level=info msg="connecting to shim 19bbe9f72baaff7b8caa270e2e79b833c5d533524fb816f8d151d6c020606bbf" address="unix:///run/containerd/s/12c31b5c0f2e2b1d072985c3be3cd011e93331a56312f379f86365019fe74d0c" protocol=ttrpc version=3 Nov 6 00:20:07.015906 systemd[1]: Started cri-containerd-19bbe9f72baaff7b8caa270e2e79b833c5d533524fb816f8d151d6c020606bbf.scope - libcontainer container 19bbe9f72baaff7b8caa270e2e79b833c5d533524fb816f8d151d6c020606bbf. 
Nov 6 00:20:07.072216 containerd[1494]: time="2025-11-06T00:20:07.072161887Z" level=info msg="StartContainer for \"19bbe9f72baaff7b8caa270e2e79b833c5d533524fb816f8d151d6c020606bbf\" returns successfully" Nov 6 00:20:07.194635 containerd[1494]: time="2025-11-06T00:20:07.194544428Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:07.195629 containerd[1494]: time="2025-11-06T00:20:07.195506881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:20:07.195734 containerd[1494]: time="2025-11-06T00:20:07.195653991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:20:07.196059 kubelet[2686]: E1106 00:20:07.195980 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:20:07.196059 kubelet[2686]: E1106 00:20:07.196037 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:20:07.197404 kubelet[2686]: E1106 00:20:07.196894 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8495cfffbb-59fst_calico-system(fa727a0e-c8e3-4851-8c43-fa33e679ce52): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:07.197404 kubelet[2686]: E1106 00:20:07.196943 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" podUID="fa727a0e-c8e3-4851-8c43-fa33e679ce52" Nov 6 00:20:07.198001 containerd[1494]: time="2025-11-06T00:20:07.197884374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:20:07.432676 kubelet[2686]: E1106 00:20:07.431976 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:07.438152 kubelet[2686]: E1106 00:20:07.438088 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" podUID="fa727a0e-c8e3-4851-8c43-fa33e679ce52" Nov 6 00:20:07.440491 kubelet[2686]: E1106 00:20:07.440339 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" podUID="22d06117-b04a-43e9-87e1-aa14b7fcef4f" Nov 6 00:20:07.440491 kubelet[2686]: E1106 00:20:07.440377 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-fx2xl" podUID="d092cd15-7a3f-47f6-bde9-78a01defdd36" Nov 6 00:20:07.492324 kubelet[2686]: I1106 00:20:07.492158 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nf9q8" podStartSLOduration=40.492137676 podStartE2EDuration="40.492137676s" podCreationTimestamp="2025-11-06 00:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:20:07.464223888 +0000 UTC m=+46.668083424" watchObservedRunningTime="2025-11-06 00:20:07.492137676 +0000 UTC m=+46.695997283" Nov 6 00:20:07.619004 containerd[1494]: time="2025-11-06T00:20:07.618934181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:07.620415 containerd[1494]: time="2025-11-06T00:20:07.620293011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:20:07.620589 containerd[1494]: time="2025-11-06T00:20:07.620444827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:20:07.621020 kubelet[2686]: E1106 00:20:07.620936 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:20:07.621130 kubelet[2686]: E1106 00:20:07.621033 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:20:07.621548 kubelet[2686]: E1106 00:20:07.621235 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fsxfk_calico-system(14475adc-4ac3-4f9b-9293-bb510ff52d31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:07.623314 containerd[1494]: time="2025-11-06T00:20:07.623260297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:20:07.980045 kubelet[2686]: E1106 00:20:07.979879 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:07.980853 containerd[1494]: time="2025-11-06T00:20:07.980798963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jf6gr,Uid:e52578e9-6c1c-4ea4-bb10-32c4b4007c8c,Namespace:kube-system,Attempt:0,}" Nov 6 00:20:07.994883 containerd[1494]: time="2025-11-06T00:20:07.994607643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:07.995725 containerd[1494]: time="2025-11-06T00:20:07.995605435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:20:07.997706 containerd[1494]: time="2025-11-06T00:20:07.995656703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:20:07.998183 kubelet[2686]: E1106 00:20:07.998139 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:20:07.998260 kubelet[2686]: E1106 00:20:07.998229 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:20:07.998441 kubelet[2686]: E1106 00:20:07.998389 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fsxfk_calico-system(14475adc-4ac3-4f9b-9293-bb510ff52d31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:07.999204 kubelet[2686]: E1106 00:20:07.998590 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:20:08.022788 systemd-networkd[1422]: cali260681c225b: Gained IPv6LL Nov 6 00:20:08.024270 systemd-networkd[1422]: cali94711693eba: Gained IPv6LL Nov 6 00:20:08.169710 systemd-networkd[1422]: caliddb4fb59464: Link UP Nov 6 00:20:08.171051 systemd-networkd[1422]: caliddb4fb59464: Gained carrier Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.059 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0 coredns-66bc5c9577- kube-system e52578e9-6c1c-4ea4-bb10-32c4b4007c8c 829 0 2025-11-06 00:19:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-46450dc2d5 coredns-66bc5c9577-jf6gr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliddb4fb59464 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" Namespace="kube-system" Pod="coredns-66bc5c9577-jf6gr" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.059 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" Namespace="kube-system" Pod="coredns-66bc5c9577-jf6gr" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.102 [INFO][4539] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" HandleID="k8s-pod-network.6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" Workload="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.102 [INFO][4539] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" HandleID="k8s-pod-network.6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" Workload="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd020), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-46450dc2d5", "pod":"coredns-66bc5c9577-jf6gr", "timestamp":"2025-11-06 00:20:08.102419673 +0000 UTC"}, Hostname:"ci-4459.1.0-n-46450dc2d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.102 [INFO][4539] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.102 [INFO][4539] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.102 [INFO][4539] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-46450dc2d5' Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.112 [INFO][4539] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.120 [INFO][4539] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.128 [INFO][4539] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.132 [INFO][4539] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.137 [INFO][4539] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.137 [INFO][4539] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.140 [INFO][4539] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915 Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.146 [INFO][4539] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.157 [INFO][4539] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.199/26] block=192.168.71.192/26 handle="k8s-pod-network.6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.157 [INFO][4539] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.199/26] handle="k8s-pod-network.6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.157 [INFO][4539] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:20:08.193760 containerd[1494]: 2025-11-06 00:20:08.157 [INFO][4539] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.199/26] IPv6=[] ContainerID="6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" HandleID="k8s-pod-network.6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" Workload="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0" Nov 6 00:20:08.197935 containerd[1494]: 2025-11-06 00:20:08.161 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" Namespace="kube-system" Pod="coredns-66bc5c9577-jf6gr" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e52578e9-6c1c-4ea4-bb10-32c4b4007c8c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"", Pod:"coredns-66bc5c9577-jf6gr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddb4fb59464", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:08.197935 containerd[1494]: 2025-11-06 00:20:08.161 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.199/32] ContainerID="6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" Namespace="kube-system" Pod="coredns-66bc5c9577-jf6gr" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0" Nov 6 00:20:08.197935 containerd[1494]: 2025-11-06 00:20:08.161 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddb4fb59464 ContainerID="6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" Namespace="kube-system" Pod="coredns-66bc5c9577-jf6gr" 
WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0" Nov 6 00:20:08.197935 containerd[1494]: 2025-11-06 00:20:08.172 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" Namespace="kube-system" Pod="coredns-66bc5c9577-jf6gr" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0" Nov 6 00:20:08.197935 containerd[1494]: 2025-11-06 00:20:08.172 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" Namespace="kube-system" Pod="coredns-66bc5c9577-jf6gr" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e52578e9-6c1c-4ea4-bb10-32c4b4007c8c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915", Pod:"coredns-66bc5c9577-jf6gr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddb4fb59464", MAC:"7a:da:76:d0:41:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:08.198370 containerd[1494]: 2025-11-06 00:20:08.185 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" Namespace="kube-system" Pod="coredns-66bc5c9577-jf6gr" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-coredns--66bc5c9577--jf6gr-eth0" Nov 6 00:20:08.215963 systemd-networkd[1422]: caliaa86b3c552c: Gained IPv6LL Nov 6 00:20:08.249378 containerd[1494]: time="2025-11-06T00:20:08.248113493Z" level=info msg="connecting to shim 
6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915" address="unix:///run/containerd/s/4a8d2226cc7e3c912a6215e6dc3c9ddc8eee595425eddc63db0d82d39e050233" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:20:08.309019 systemd[1]: Started cri-containerd-6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915.scope - libcontainer container 6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915. Nov 6 00:20:08.414230 containerd[1494]: time="2025-11-06T00:20:08.414081417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jf6gr,Uid:e52578e9-6c1c-4ea4-bb10-32c4b4007c8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915\"" Nov 6 00:20:08.417894 kubelet[2686]: E1106 00:20:08.417838 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:08.427105 containerd[1494]: time="2025-11-06T00:20:08.426949421Z" level=info msg="CreateContainer within sandbox \"6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:20:08.449816 containerd[1494]: time="2025-11-06T00:20:08.447036905Z" level=info msg="Container d3595b281175da9bacfad165f1582d656846b34f344872713d9aacab1d30cb74: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:20:08.456755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2908454401.mount: Deactivated successfully. Nov 6 00:20:08.460514 kubelet[2686]: E1106 00:20:08.459154 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:08.465187 kubelet[2686]: E1106 00:20:08.465135 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" podUID="fa727a0e-c8e3-4851-8c43-fa33e679ce52" Nov 6 00:20:08.466592 kubelet[2686]: E1106 00:20:08.466435 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fsxfk" 
podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:20:08.474745 containerd[1494]: time="2025-11-06T00:20:08.474218441Z" level=info msg="CreateContainer within sandbox \"6bf3b263ea8432d59cc8382c8bc23bedb18b1e3f051e3222cbd3d877be000915\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3595b281175da9bacfad165f1582d656846b34f344872713d9aacab1d30cb74\"" Nov 6 00:20:08.478094 containerd[1494]: time="2025-11-06T00:20:08.478033505Z" level=info msg="StartContainer for \"d3595b281175da9bacfad165f1582d656846b34f344872713d9aacab1d30cb74\"" Nov 6 00:20:08.481495 containerd[1494]: time="2025-11-06T00:20:08.481356855Z" level=info msg="connecting to shim d3595b281175da9bacfad165f1582d656846b34f344872713d9aacab1d30cb74" address="unix:///run/containerd/s/4a8d2226cc7e3c912a6215e6dc3c9ddc8eee595425eddc63db0d82d39e050233" protocol=ttrpc version=3 Nov 6 00:20:08.544954 systemd[1]: Started cri-containerd-d3595b281175da9bacfad165f1582d656846b34f344872713d9aacab1d30cb74.scope - libcontainer container d3595b281175da9bacfad165f1582d656846b34f344872713d9aacab1d30cb74. Nov 6 00:20:08.602374 containerd[1494]: time="2025-11-06T00:20:08.602330225Z" level=info msg="StartContainer for \"d3595b281175da9bacfad165f1582d656846b34f344872713d9aacab1d30cb74\" returns successfully" Nov 6 00:20:08.983507 containerd[1494]: time="2025-11-06T00:20:08.983411637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5765b6cb-8c67x,Uid:221557dd-55b6-4b8e-a63d-bc09352c8c41,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:20:09.158393 systemd-networkd[1422]: calib5daab0f6c1: Link UP Nov 6 00:20:09.159377 systemd-networkd[1422]: calib5daab0f6c1: Gained carrier Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.039 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0 calico-apiserver-5765b6cb- calico-apiserver 221557dd-55b6-4b8e-a63d-bc09352c8c41 837 0 2025-11-06 00:19:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5765b6cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-46450dc2d5 calico-apiserver-5765b6cb-8c67x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib5daab0f6c1 [] [] }} ContainerID="77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-8c67x" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-" Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.042 [INFO][4636] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-8c67x" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0" Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.091 [INFO][4647] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" HandleID="k8s-pod-network.77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" Workload="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0" Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 
00:20:09.091 [INFO][4647] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" HandleID="k8s-pod-network.77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" Workload="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-46450dc2d5", "pod":"calico-apiserver-5765b6cb-8c67x", "timestamp":"2025-11-06 00:20:09.091185654 +0000 UTC"}, Hostname:"ci-4459.1.0-n-46450dc2d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.091 [INFO][4647] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.091 [INFO][4647] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.091 [INFO][4647] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-46450dc2d5' Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.103 [INFO][4647] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.112 [INFO][4647] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.121 [INFO][4647] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.124 [INFO][4647] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.128 [INFO][4647] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.128 [INFO][4647] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.132 [INFO][4647] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85 Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.139 [INFO][4647] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.147 [INFO][4647] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.200/26] block=192.168.71.192/26 handle="k8s-pod-network.77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" host="ci-4459.1.0-n-46450dc2d5" Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.147 [INFO][4647] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.200/26] handle="k8s-pod-network.77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" host="ci-4459.1.0-n-46450dc2d5" Nov 6 
00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.147 [INFO][4647] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:20:09.195357 containerd[1494]: 2025-11-06 00:20:09.147 [INFO][4647] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.200/26] IPv6=[] ContainerID="77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" HandleID="k8s-pod-network.77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" Workload="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0" Nov 6 00:20:09.196169 containerd[1494]: 2025-11-06 00:20:09.153 [INFO][4636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-8c67x" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0", GenerateName:"calico-apiserver-5765b6cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"221557dd-55b6-4b8e-a63d-bc09352c8c41", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5765b6cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"", Pod:"calico-apiserver-5765b6cb-8c67x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5daab0f6c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:09.196169 containerd[1494]: 2025-11-06 00:20:09.153 [INFO][4636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.200/32] ContainerID="77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-8c67x" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0" Nov 6 00:20:09.196169 containerd[1494]: 2025-11-06 00:20:09.153 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5daab0f6c1 ContainerID="77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-8c67x" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0" Nov 6 00:20:09.196169 containerd[1494]: 2025-11-06 00:20:09.159 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-8c67x" 
WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0" Nov 6 00:20:09.196169 containerd[1494]: 2025-11-06 00:20:09.161 [INFO][4636] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-8c67x" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0", GenerateName:"calico-apiserver-5765b6cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"221557dd-55b6-4b8e-a63d-bc09352c8c41", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 19, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5765b6cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-46450dc2d5", ContainerID:"77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85", Pod:"calico-apiserver-5765b6cb-8c67x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5daab0f6c1", MAC:"9e:6b:1f:51:7f:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:20:09.196169 containerd[1494]: 2025-11-06 00:20:09.183 [INFO][4636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" Namespace="calico-apiserver" Pod="calico-apiserver-5765b6cb-8c67x" WorkloadEndpoint="ci--4459.1.0--n--46450dc2d5-k8s-calico--apiserver--5765b6cb--8c67x-eth0" Nov 6 00:20:09.240726 containerd[1494]: time="2025-11-06T00:20:09.240513399Z" level=info msg="connecting to shim 77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85" address="unix:///run/containerd/s/8142660310f7060be26d620475f351c3048bd29e9a6e9fe6d16b553fb6bc2394" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:20:09.288861 systemd[1]: Started cri-containerd-77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85.scope - libcontainer container 77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85. 
Nov 6 00:20:09.367569 systemd-networkd[1422]: caliddb4fb59464: Gained IPv6LL Nov 6 00:20:09.399592 containerd[1494]: time="2025-11-06T00:20:09.399549073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5765b6cb-8c67x,Uid:221557dd-55b6-4b8e-a63d-bc09352c8c41,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"77fa8acabb404ea24538d8bc9ebc2f7f3d76c55ee859a483b1d11a85ba3cbc85\"" Nov 6 00:20:09.407734 containerd[1494]: time="2025-11-06T00:20:09.407622387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:20:09.466908 kubelet[2686]: E1106 00:20:09.466853 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:09.485654 kubelet[2686]: I1106 00:20:09.485510 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jf6gr" podStartSLOduration=42.48545328 podStartE2EDuration="42.48545328s" podCreationTimestamp="2025-11-06 00:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:20:09.485119186 +0000 UTC m=+48.688978719" watchObservedRunningTime="2025-11-06 00:20:09.48545328 +0000 UTC m=+48.689312817" Nov 6 00:20:09.752715 containerd[1494]: time="2025-11-06T00:20:09.752519923Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:09.753499 containerd[1494]: time="2025-11-06T00:20:09.753410934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:20:09.753499 containerd[1494]: time="2025-11-06T00:20:09.753456385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:20:09.753884 kubelet[2686]: E1106 00:20:09.753820 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:09.753974 kubelet[2686]: E1106 00:20:09.753890 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:09.754043 kubelet[2686]: E1106 00:20:09.753996 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5765b6cb-8c67x_calico-apiserver(221557dd-55b6-4b8e-a63d-bc09352c8c41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:09.754108 kubelet[2686]: E1106 00:20:09.754049 2686 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" podUID="221557dd-55b6-4b8e-a63d-bc09352c8c41" Nov 6 00:20:10.390686 systemd-networkd[1422]: calib5daab0f6c1: Gained IPv6LL Nov 6 00:20:10.475512 kubelet[2686]: E1106 00:20:10.475199 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:10.478352 kubelet[2686]: E1106 00:20:10.478309 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" podUID="221557dd-55b6-4b8e-a63d-bc09352c8c41" Nov 6 00:20:11.480832 kubelet[2686]: E1106 00:20:11.480771 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:17.980426 containerd[1494]: time="2025-11-06T00:20:17.980213040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:20:18.293943 containerd[1494]: time="2025-11-06T00:20:18.293590113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:18.294609 containerd[1494]: time="2025-11-06T00:20:18.294549921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:20:18.294797 containerd[1494]: time="2025-11-06T00:20:18.294571959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:20:18.295552 kubelet[2686]: E1106 00:20:18.295506 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:18.296258 kubelet[2686]: E1106 00:20:18.295565 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:18.296258 kubelet[2686]: E1106 00:20:18.295955 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-5765b6cb-hzncn_calico-apiserver(22d06117-b04a-43e9-87e1-aa14b7fcef4f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:18.296258 kubelet[2686]: E1106 00:20:18.296010 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" podUID="22d06117-b04a-43e9-87e1-aa14b7fcef4f" Nov 6 00:20:18.297165 containerd[1494]: time="2025-11-06T00:20:18.297126630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:20:18.647495 containerd[1494]: time="2025-11-06T00:20:18.647254729Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:18.648484 containerd[1494]: time="2025-11-06T00:20:18.648407118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:20:18.648790 containerd[1494]: time="2025-11-06T00:20:18.648559280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:20:18.648865 kubelet[2686]: E1106 00:20:18.648733 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:20:18.648865 kubelet[2686]: E1106 00:20:18.648782 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:20:18.649080 kubelet[2686]: E1106 00:20:18.648887 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66d8bc69bc-fhr2k_calico-system(cfa7aa38-883a-4fbf-a2b0-653ce5e79003): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:18.652097 containerd[1494]: time="2025-11-06T00:20:18.652059218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:20:18.974284 containerd[1494]: time="2025-11-06T00:20:18.974172129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:18.975879 containerd[1494]: time="2025-11-06T00:20:18.975690518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:20:18.975879 containerd[1494]: time="2025-11-06T00:20:18.975730977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:20:18.977482 kubelet[2686]: E1106 00:20:18.976576 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:20:18.977482 kubelet[2686]: E1106 00:20:18.976656 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:20:18.977482 kubelet[2686]: E1106 00:20:18.976778 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66d8bc69bc-fhr2k_calico-system(cfa7aa38-883a-4fbf-a2b0-653ce5e79003): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:18.977783 kubelet[2686]: E1106 00:20:18.976837 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d8bc69bc-fhr2k" podUID="cfa7aa38-883a-4fbf-a2b0-653ce5e79003" Nov 6 00:20:18.980309 containerd[1494]: time="2025-11-06T00:20:18.980260505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:20:19.322904 containerd[1494]: time="2025-11-06T00:20:19.322741298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:19.323827 containerd[1494]: time="2025-11-06T00:20:19.323618765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:20:19.323827 containerd[1494]: time="2025-11-06T00:20:19.323779552Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:20:19.324065 kubelet[2686]: E1106 00:20:19.323981 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:20:19.324065 kubelet[2686]: E1106 00:20:19.324036 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:20:19.324778 kubelet[2686]: E1106 00:20:19.324121 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8495cfffbb-59fst_calico-system(fa727a0e-c8e3-4851-8c43-fa33e679ce52): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:19.324778 kubelet[2686]: E1106 00:20:19.324159 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" podUID="fa727a0e-c8e3-4851-8c43-fa33e679ce52" Nov 6 00:20:21.980099 containerd[1494]: time="2025-11-06T00:20:21.979923360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:20:22.317728 containerd[1494]: time="2025-11-06T00:20:22.317559422Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:22.318374 containerd[1494]: time="2025-11-06T00:20:22.318320713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:20:22.318535 containerd[1494]: time="2025-11-06T00:20:22.318414997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:20:22.318764 kubelet[2686]: E1106 00:20:22.318647 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:20:22.319162 kubelet[2686]: E1106 00:20:22.318824 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:20:22.319798 kubelet[2686]: E1106 00:20:22.319566 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-fx2xl_calico-system(d092cd15-7a3f-47f6-bde9-78a01defdd36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:22.320214 kubelet[2686]: E1106 00:20:22.319765 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-fx2xl" podUID="d092cd15-7a3f-47f6-bde9-78a01defdd36" Nov 6 00:20:22.320613 containerd[1494]: time="2025-11-06T00:20:22.320579976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:20:22.634735 containerd[1494]: time="2025-11-06T00:20:22.634582682Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:22.635219 containerd[1494]: time="2025-11-06T00:20:22.635170994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:20:22.635605 containerd[1494]: time="2025-11-06T00:20:22.635274269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:20:22.635756 kubelet[2686]: E1106 00:20:22.635643 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:22.635756 kubelet[2686]: E1106 00:20:22.635699 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:22.636305 kubelet[2686]: E1106 00:20:22.635813 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5765b6cb-8c67x_calico-apiserver(221557dd-55b6-4b8e-a63d-bc09352c8c41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:22.636305 kubelet[2686]: 
E1106 00:20:22.635864 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" podUID="221557dd-55b6-4b8e-a63d-bc09352c8c41" Nov 6 00:20:22.980223 containerd[1494]: time="2025-11-06T00:20:22.979751044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:20:23.362851 containerd[1494]: time="2025-11-06T00:20:23.362605276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:23.363446 containerd[1494]: time="2025-11-06T00:20:23.363407509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:20:23.364369 containerd[1494]: time="2025-11-06T00:20:23.363496153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:20:23.364433 kubelet[2686]: E1106 00:20:23.363739 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:20:23.364433 kubelet[2686]: E1106 00:20:23.363794 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:20:23.364433 kubelet[2686]: E1106 00:20:23.363903 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fsxfk_calico-system(14475adc-4ac3-4f9b-9293-bb510ff52d31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:23.366000 containerd[1494]: time="2025-11-06T00:20:23.365950495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:20:23.711727 containerd[1494]: time="2025-11-06T00:20:23.711668265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:23.712589 containerd[1494]: time="2025-11-06T00:20:23.712516995Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:20:23.712687 containerd[1494]: time="2025-11-06T00:20:23.712561026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active 
requests=0, bytes read=93" Nov 6 00:20:23.712875 kubelet[2686]: E1106 00:20:23.712833 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:20:23.712933 kubelet[2686]: E1106 00:20:23.712886 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:20:23.712982 kubelet[2686]: E1106 00:20:23.712965 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fsxfk_calico-system(14475adc-4ac3-4f9b-9293-bb510ff52d31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:23.713046 kubelet[2686]: E1106 00:20:23.713015 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:20:28.982711 kubelet[2686]: E1106 00:20:28.981615 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" podUID="22d06117-b04a-43e9-87e1-aa14b7fcef4f" Nov 6 00:20:32.979952 kubelet[2686]: E1106 00:20:32.979737 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-fx2xl" podUID="d092cd15-7a3f-47f6-bde9-78a01defdd36" Nov 6 00:20:32.983217 kubelet[2686]: E1106 00:20:32.982389 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d8bc69bc-fhr2k" podUID="cfa7aa38-883a-4fbf-a2b0-653ce5e79003" Nov 6 00:20:33.878727 systemd[1]: Started sshd@7-64.23.183.231:22-139.178.68.195:39666.service - OpenSSH per-connection server daemon (139.178.68.195:39666). Nov 6 00:20:33.980011 kubelet[2686]: E1106 00:20:33.979936 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" podUID="221557dd-55b6-4b8e-a63d-bc09352c8c41" Nov 6 00:20:33.983315 kubelet[2686]: E1106 00:20:33.983228 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" podUID="fa727a0e-c8e3-4851-8c43-fa33e679ce52" Nov 6 00:20:34.062028 sshd[4748]: Accepted publickey for core from 139.178.68.195 port 39666 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:20:34.064631 sshd-session[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:20:34.078876 systemd-logind[1474]: New session 8 of user core. Nov 6 00:20:34.080883 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 6 00:20:34.496127 containerd[1494]: time="2025-11-06T00:20:34.496075215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b\" id:\"683ccfa47b76972e4007735ef8d8e3326f8cc9f4b1ea4369528dddcb3add1b11\" pid:4767 exited_at:{seconds:1762388434 nanos:493443018}" Nov 6 00:20:34.508636 kubelet[2686]: E1106 00:20:34.507631 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:34.674500 sshd[4754]: Connection closed by 139.178.68.195 port 39666 Nov 6 00:20:34.675569 sshd-session[4748]: pam_unix(sshd:session): session closed for user core Nov 6 00:20:34.683013 systemd[1]: sshd@7-64.23.183.231:22-139.178.68.195:39666.service: Deactivated successfully. Nov 6 00:20:34.687417 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:20:34.688869 systemd-logind[1474]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:20:34.691167 systemd-logind[1474]: Removed session 8. Nov 6 00:20:34.978487 kubelet[2686]: E1106 00:20:34.978407 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:37.983495 kubelet[2686]: E1106 00:20:37.983083 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:20:39.693025 systemd[1]: Started sshd@8-64.23.183.231:22-139.178.68.195:39668.service - OpenSSH per-connection server daemon (139.178.68.195:39668). Nov 6 00:20:39.774623 sshd[4794]: Accepted publickey for core from 139.178.68.195 port 39668 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:20:39.775439 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:20:39.782153 systemd-logind[1474]: New session 9 of user core. Nov 6 00:20:39.791848 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:20:39.983908 sshd[4797]: Connection closed by 139.178.68.195 port 39668 Nov 6 00:20:39.984621 sshd-session[4794]: pam_unix(sshd:session): session closed for user core Nov 6 00:20:39.991009 systemd[1]: sshd@8-64.23.183.231:22-139.178.68.195:39668.service: Deactivated successfully. Nov 6 00:20:39.996301 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:20:39.998304 systemd-logind[1474]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:20:40.001919 systemd-logind[1474]: Removed session 9. 
Nov 6 00:20:42.980087 containerd[1494]: time="2025-11-06T00:20:42.980043484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:20:43.308263 containerd[1494]: time="2025-11-06T00:20:43.307387643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:43.308761 containerd[1494]: time="2025-11-06T00:20:43.308631937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:20:43.308761 containerd[1494]: time="2025-11-06T00:20:43.308742991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:20:43.309058 kubelet[2686]: E1106 00:20:43.308990 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:43.309595 kubelet[2686]: E1106 00:20:43.309144 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:43.309595 kubelet[2686]: E1106 00:20:43.309248 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5765b6cb-hzncn_calico-apiserver(22d06117-b04a-43e9-87e1-aa14b7fcef4f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:43.309746 kubelet[2686]: E1106 00:20:43.309701 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" podUID="22d06117-b04a-43e9-87e1-aa14b7fcef4f" Nov 6 00:20:43.981282 containerd[1494]: time="2025-11-06T00:20:43.980736215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:20:44.321604 containerd[1494]: time="2025-11-06T00:20:44.321388953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:44.324708 containerd[1494]: time="2025-11-06T00:20:44.324627324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:20:44.324902 containerd[1494]: time="2025-11-06T00:20:44.324666646Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:20:44.325110 kubelet[2686]: E1106 00:20:44.325041 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:20:44.326765 kubelet[2686]: E1106 00:20:44.325130 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:20:44.326765 kubelet[2686]: E1106 00:20:44.325353 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66d8bc69bc-fhr2k_calico-system(cfa7aa38-883a-4fbf-a2b0-653ce5e79003): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:44.327234 containerd[1494]: time="2025-11-06T00:20:44.326058146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:20:44.630826 containerd[1494]: time="2025-11-06T00:20:44.630656700Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:44.631825 containerd[1494]: time="2025-11-06T00:20:44.631441912Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:20:44.633633 containerd[1494]: time="2025-11-06T00:20:44.631609765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:20:44.633941 kubelet[2686]: E1106 00:20:44.633892 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:20:44.634271 kubelet[2686]: E1106 00:20:44.634051 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:20:44.634853 kubelet[2686]: E1106 00:20:44.634293 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-fx2xl_calico-system(d092cd15-7a3f-47f6-bde9-78a01defdd36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:44.634853 kubelet[2686]: E1106 00:20:44.634574 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-fx2xl" podUID="d092cd15-7a3f-47f6-bde9-78a01defdd36" Nov 6 00:20:44.635015 containerd[1494]: time="2025-11-06T00:20:44.634625016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:20:44.978386 kubelet[2686]: E1106 00:20:44.977998 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:45.003765 systemd[1]: Started sshd@9-64.23.183.231:22-139.178.68.195:43030.service - OpenSSH per-connection server daemon (139.178.68.195:43030). Nov 6 00:20:45.075405 sshd[4816]: Accepted publickey for core from 139.178.68.195 port 43030 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:20:45.076967 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:20:45.088060 systemd-logind[1474]: New session 10 of user core. Nov 6 00:20:45.093799 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:20:45.136002 containerd[1494]: time="2025-11-06T00:20:45.135936498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:45.137052 containerd[1494]: time="2025-11-06T00:20:45.136763867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:20:45.137052 containerd[1494]: time="2025-11-06T00:20:45.136853893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:20:45.138344 kubelet[2686]: E1106 00:20:45.137014 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:20:45.138344 kubelet[2686]: E1106 00:20:45.137066 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:20:45.138344 kubelet[2686]: E1106 00:20:45.137153 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66d8bc69bc-fhr2k_calico-system(cfa7aa38-883a-4fbf-a2b0-653ce5e79003): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:45.138558 kubelet[2686]: E1106 00:20:45.137192 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d8bc69bc-fhr2k" podUID="cfa7aa38-883a-4fbf-a2b0-653ce5e79003" Nov 6 00:20:45.273691 sshd[4819]: Connection closed by 139.178.68.195 port 43030 Nov 6 00:20:45.274620 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Nov 6 00:20:45.282115 systemd-logind[1474]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:20:45.282876 systemd[1]: sshd@9-64.23.183.231:22-139.178.68.195:43030.service: Deactivated successfully. Nov 6 00:20:45.286032 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:20:45.292650 systemd-logind[1474]: Removed session 10. Nov 6 00:20:47.980783 containerd[1494]: time="2025-11-06T00:20:47.980444187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:20:48.334875 containerd[1494]: time="2025-11-06T00:20:48.334309726Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:48.335905 containerd[1494]: time="2025-11-06T00:20:48.335838641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:20:48.336042 containerd[1494]: time="2025-11-06T00:20:48.335868149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:20:48.336753 kubelet[2686]: E1106 00:20:48.336422 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:20:48.336753 kubelet[2686]: E1106 00:20:48.336526 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:20:48.338607 kubelet[2686]: E1106 00:20:48.337988 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
calico-kube-controllers start failed in pod calico-kube-controllers-8495cfffbb-59fst_calico-system(fa727a0e-c8e3-4851-8c43-fa33e679ce52): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:48.338720 containerd[1494]: time="2025-11-06T00:20:48.337757979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:20:48.339525 kubelet[2686]: E1106 00:20:48.338904 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" podUID="fa727a0e-c8e3-4851-8c43-fa33e679ce52" Nov 6 00:20:48.661589 containerd[1494]: time="2025-11-06T00:20:48.661420615Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:48.662482 containerd[1494]: time="2025-11-06T00:20:48.662411053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:20:48.662593 containerd[1494]: time="2025-11-06T00:20:48.662546360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:20:48.662818 kubelet[2686]: E1106 00:20:48.662777 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:48.662899 kubelet[2686]: E1106 00:20:48.662835 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:20:48.662961 kubelet[2686]: E1106 00:20:48.662931 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5765b6cb-8c67x_calico-apiserver(221557dd-55b6-4b8e-a63d-bc09352c8c41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:48.663049 kubelet[2686]: E1106 00:20:48.662979 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" podUID="221557dd-55b6-4b8e-a63d-bc09352c8c41" Nov 6 00:20:48.980872 containerd[1494]: time="2025-11-06T00:20:48.980771729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:20:49.314461 containerd[1494]: time="2025-11-06T00:20:49.314195855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:49.315297 containerd[1494]: time="2025-11-06T00:20:49.315220771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:20:49.315297 containerd[1494]: time="2025-11-06T00:20:49.315261393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:20:49.315983 kubelet[2686]: E1106 00:20:49.315854 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:20:49.315983 kubelet[2686]: E1106 00:20:49.315914 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:20:49.316450 kubelet[2686]: E1106 00:20:49.316378 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fsxfk_calico-system(14475adc-4ac3-4f9b-9293-bb510ff52d31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:49.318110 containerd[1494]: time="2025-11-06T00:20:49.318038368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:20:49.664227 containerd[1494]: time="2025-11-06T00:20:49.663869780Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:20:49.665267 containerd[1494]: time="2025-11-06T00:20:49.665132232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:20:49.665267 containerd[1494]: time="2025-11-06T00:20:49.665195387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:20:49.666370 kubelet[2686]: E1106 00:20:49.665669 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:20:49.666370 kubelet[2686]: E1106 00:20:49.665727 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:20:49.666370 kubelet[2686]: E1106 00:20:49.665813 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fsxfk_calico-system(14475adc-4ac3-4f9b-9293-bb510ff52d31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:20:49.667125 kubelet[2686]: E1106 00:20:49.665866 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:20:50.290233 systemd[1]: Started sshd@10-64.23.183.231:22-139.178.68.195:43044.service - OpenSSH per-connection server daemon (139.178.68.195:43044). Nov 6 00:20:50.443503 sshd[4834]: Accepted publickey for core from 139.178.68.195 port 43044 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:20:50.446230 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:20:50.454587 systemd-logind[1474]: New session 11 of user core. Nov 6 00:20:50.460617 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 00:20:50.672579 sshd[4837]: Connection closed by 139.178.68.195 port 43044 Nov 6 00:20:50.673279 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Nov 6 00:20:50.686123 systemd[1]: sshd@10-64.23.183.231:22-139.178.68.195:43044.service: Deactivated successfully. Nov 6 00:20:50.689750 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:20:50.691556 systemd-logind[1474]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:20:50.695786 systemd[1]: Started sshd@11-64.23.183.231:22-139.178.68.195:43050.service - OpenSSH per-connection server daemon (139.178.68.195:43050). Nov 6 00:20:50.697182 systemd-logind[1474]: Removed session 11. 
Nov 6 00:20:50.768522 sshd[4849]: Accepted publickey for core from 139.178.68.195 port 43050 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:20:50.769929 sshd-session[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:20:50.777554 systemd-logind[1474]: New session 12 of user core. Nov 6 00:20:50.786799 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:20:50.986520 sshd[4852]: Connection closed by 139.178.68.195 port 43050 Nov 6 00:20:50.990295 sshd-session[4849]: pam_unix(sshd:session): session closed for user core Nov 6 00:20:51.002493 systemd[1]: sshd@11-64.23.183.231:22-139.178.68.195:43050.service: Deactivated successfully. Nov 6 00:20:51.005923 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:20:51.008030 systemd-logind[1474]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:20:51.014368 systemd[1]: Started sshd@12-64.23.183.231:22-139.178.68.195:43062.service - OpenSSH per-connection server daemon (139.178.68.195:43062). Nov 6 00:20:51.016153 systemd-logind[1474]: Removed session 12. Nov 6 00:20:51.104246 sshd[4861]: Accepted publickey for core from 139.178.68.195 port 43062 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:20:51.105446 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:20:51.111562 systemd-logind[1474]: New session 13 of user core. Nov 6 00:20:51.120766 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:20:51.276494 sshd[4864]: Connection closed by 139.178.68.195 port 43062 Nov 6 00:20:51.277378 sshd-session[4861]: pam_unix(sshd:session): session closed for user core Nov 6 00:20:51.282820 systemd[1]: sshd@12-64.23.183.231:22-139.178.68.195:43062.service: Deactivated successfully. Nov 6 00:20:51.285536 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:20:51.288127 systemd-logind[1474]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:20:51.290928 systemd-logind[1474]: Removed session 13. Nov 6 00:20:54.978509 kubelet[2686]: E1106 00:20:54.978404 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:20:56.292304 systemd[1]: Started sshd@13-64.23.183.231:22-139.178.68.195:40372.service - OpenSSH per-connection server daemon (139.178.68.195:40372). Nov 6 00:20:56.364423 sshd[4880]: Accepted publickey for core from 139.178.68.195 port 40372 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:20:56.365995 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:20:56.372214 systemd-logind[1474]: New session 14 of user core. Nov 6 00:20:56.378810 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:20:56.648340 sshd[4883]: Connection closed by 139.178.68.195 port 40372 Nov 6 00:20:56.649019 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Nov 6 00:20:56.657006 systemd[1]: sshd@13-64.23.183.231:22-139.178.68.195:40372.service: Deactivated successfully. Nov 6 00:20:56.662012 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:20:56.663953 systemd-logind[1474]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:20:56.667977 systemd-logind[1474]: Removed session 14. 
Nov 6 00:20:56.980861 kubelet[2686]: E1106 00:20:56.980797 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" podUID="22d06117-b04a-43e9-87e1-aa14b7fcef4f" Nov 6 00:20:57.979032 kubelet[2686]: E1106 00:20:57.978973 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-fx2xl" podUID="d092cd15-7a3f-47f6-bde9-78a01defdd36" Nov 6 00:20:58.982575 kubelet[2686]: E1106 00:20:58.982510 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d8bc69bc-fhr2k" podUID="cfa7aa38-883a-4fbf-a2b0-653ce5e79003" Nov 6 00:21:01.670872 systemd[1]: Started sshd@14-64.23.183.231:22-139.178.68.195:40384.service - OpenSSH per-connection server daemon (139.178.68.195:40384). Nov 6 00:21:01.791259 sshd[4898]: Accepted publickey for core from 139.178.68.195 port 40384 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:01.794157 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:01.801823 systemd-logind[1474]: New session 15 of user core. Nov 6 00:21:01.807899 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:21:02.005508 sshd[4901]: Connection closed by 139.178.68.195 port 40384 Nov 6 00:21:02.006208 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:02.014090 systemd[1]: sshd@14-64.23.183.231:22-139.178.68.195:40384.service: Deactivated successfully. Nov 6 00:21:02.019229 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:21:02.022251 systemd-logind[1474]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:21:02.025779 systemd-logind[1474]: Removed session 15. 
Nov 6 00:21:02.981196 kubelet[2686]: E1106 00:21:02.980340 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" podUID="fa727a0e-c8e3-4851-8c43-fa33e679ce52" Nov 6 00:21:02.981196 kubelet[2686]: E1106 00:21:02.980881 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:21:02.982897 kubelet[2686]: E1106 00:21:02.982840 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" podUID="221557dd-55b6-4b8e-a63d-bc09352c8c41" Nov 6 00:21:04.220978 containerd[1494]: time="2025-11-06T00:21:04.220696437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b\" id:\"7c094b765eacff3cfb4ef1e60033a8ddee758d23dc6f2f40dee856bdfacd393e\" pid:4925 exited_at:{seconds:1762388464 nanos:220260158}" Nov 6 00:21:04.979181 kubelet[2686]: E1106 00:21:04.979112 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:21:04.984075 kubelet[2686]: E1106 00:21:04.982975 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:21:07.023966 systemd[1]: Started sshd@15-64.23.183.231:22-139.178.68.195:34308.service - OpenSSH per-connection server daemon (139.178.68.195:34308). 
Nov 6 00:21:07.125612 sshd[4938]: Accepted publickey for core from 139.178.68.195 port 34308 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:07.131948 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:07.139611 systemd-logind[1474]: New session 16 of user core. Nov 6 00:21:07.147829 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 00:21:07.390276 sshd[4941]: Connection closed by 139.178.68.195 port 34308 Nov 6 00:21:07.390736 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:07.398568 systemd[1]: sshd@15-64.23.183.231:22-139.178.68.195:34308.service: Deactivated successfully. Nov 6 00:21:07.401934 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:21:07.406895 systemd-logind[1474]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:21:07.408324 systemd-logind[1474]: Removed session 16. Nov 6 00:21:10.981765 kubelet[2686]: E1106 00:21:10.981146 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-fx2xl" podUID="d092cd15-7a3f-47f6-bde9-78a01defdd36" Nov 6 00:21:11.979239 kubelet[2686]: E1106 00:21:11.979175 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" podUID="22d06117-b04a-43e9-87e1-aa14b7fcef4f" Nov 6 00:21:12.411409 systemd[1]: Started sshd@16-64.23.183.231:22-139.178.68.195:34312.service - OpenSSH per-connection server daemon (139.178.68.195:34312). Nov 6 00:21:12.491507 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 34312 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:12.493375 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:12.499253 systemd-logind[1474]: New session 17 of user core. Nov 6 00:21:12.507872 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 00:21:12.694688 sshd[4956]: Connection closed by 139.178.68.195 port 34312 Nov 6 00:21:12.694629 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:12.708153 systemd[1]: sshd@16-64.23.183.231:22-139.178.68.195:34312.service: Deactivated successfully. Nov 6 00:21:12.711327 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:21:12.712417 systemd-logind[1474]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:21:12.717591 systemd[1]: Started sshd@17-64.23.183.231:22-139.178.68.195:34322.service - OpenSSH per-connection server daemon (139.178.68.195:34322). Nov 6 00:21:12.719192 systemd-logind[1474]: Removed session 17. 
Nov 6 00:21:12.826603 sshd[4967]: Accepted publickey for core from 139.178.68.195 port 34322 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:12.830188 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:12.841671 systemd-logind[1474]: New session 18 of user core. Nov 6 00:21:12.844693 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:21:13.280425 sshd[4970]: Connection closed by 139.178.68.195 port 34322 Nov 6 00:21:13.282180 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:13.292449 systemd[1]: sshd@17-64.23.183.231:22-139.178.68.195:34322.service: Deactivated successfully. Nov 6 00:21:13.296426 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:21:13.300676 systemd-logind[1474]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:21:13.304799 systemd[1]: Started sshd@18-64.23.183.231:22-139.178.68.195:57454.service - OpenSSH per-connection server daemon (139.178.68.195:57454). Nov 6 00:21:13.309633 systemd-logind[1474]: Removed session 18. Nov 6 00:21:13.448035 sshd[4980]: Accepted publickey for core from 139.178.68.195 port 57454 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:13.450880 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:13.463693 systemd-logind[1474]: New session 19 of user core. Nov 6 00:21:13.469279 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 00:21:13.983276 kubelet[2686]: E1106 00:21:13.983182 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d8bc69bc-fhr2k" podUID="cfa7aa38-883a-4fbf-a2b0-653ce5e79003" Nov 6 00:21:13.987654 kubelet[2686]: E1106 00:21:13.986289 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" podUID="fa727a0e-c8e3-4851-8c43-fa33e679ce52" Nov 6 00:21:14.478078 sshd[4983]: Connection closed by 139.178.68.195 port 57454 Nov 6 00:21:14.477087 sshd-session[4980]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:14.494917 systemd[1]: sshd@18-64.23.183.231:22-139.178.68.195:57454.service: 
Deactivated successfully. Nov 6 00:21:14.501859 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:21:14.507086 systemd-logind[1474]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:21:14.514653 systemd[1]: Started sshd@19-64.23.183.231:22-139.178.68.195:57468.service - OpenSSH per-connection server daemon (139.178.68.195:57468). Nov 6 00:21:14.516110 systemd-logind[1474]: Removed session 19. Nov 6 00:21:14.633315 sshd[5003]: Accepted publickey for core from 139.178.68.195 port 57468 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:14.636013 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:14.644414 systemd-logind[1474]: New session 20 of user core. Nov 6 00:21:14.646696 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 00:21:14.982838 kubelet[2686]: E1106 00:21:14.982687 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" podUID="221557dd-55b6-4b8e-a63d-bc09352c8c41" Nov 6 00:21:15.244542 sshd[5006]: Connection closed by 139.178.68.195 port 57468 Nov 6 00:21:15.246144 sshd-session[5003]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:15.261666 systemd[1]: sshd@19-64.23.183.231:22-139.178.68.195:57468.service: Deactivated successfully. Nov 6 00:21:15.266411 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:21:15.269138 systemd-logind[1474]: Session 20 logged out. Waiting for processes to exit. Nov 6 00:21:15.278967 systemd[1]: Started sshd@20-64.23.183.231:22-139.178.68.195:57482.service - OpenSSH per-connection server daemon (139.178.68.195:57482). Nov 6 00:21:15.281230 systemd-logind[1474]: Removed session 20. Nov 6 00:21:15.365330 sshd[5016]: Accepted publickey for core from 139.178.68.195 port 57482 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:15.366157 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:15.374526 systemd-logind[1474]: New session 21 of user core. Nov 6 00:21:15.383679 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:21:15.557713 sshd[5019]: Connection closed by 139.178.68.195 port 57482 Nov 6 00:21:15.558566 sshd-session[5016]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:15.564131 systemd-logind[1474]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:21:15.564908 systemd[1]: sshd@20-64.23.183.231:22-139.178.68.195:57482.service: Deactivated successfully. Nov 6 00:21:15.569112 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:21:15.572287 systemd-logind[1474]: Removed session 21. 
Nov 6 00:21:16.984508 kubelet[2686]: E1106 00:21:16.984159 2686 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 6 00:21:16.986384 kubelet[2686]: E1106 00:21:16.986212 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31" Nov 6 00:21:20.575505 systemd[1]: Started sshd@21-64.23.183.231:22-139.178.68.195:57496.service - OpenSSH per-connection server daemon (139.178.68.195:57496). Nov 6 00:21:20.648598 sshd[5033]: Accepted publickey for core from 139.178.68.195 port 57496 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:20.651137 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:20.662700 systemd-logind[1474]: New session 22 of user core. Nov 6 00:21:20.664984 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 00:21:20.876968 sshd[5036]: Connection closed by 139.178.68.195 port 57496 Nov 6 00:21:20.877781 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:20.884986 systemd[1]: sshd@21-64.23.183.231:22-139.178.68.195:57496.service: Deactivated successfully. Nov 6 00:21:20.888657 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 00:21:20.890677 systemd-logind[1474]: Session 22 logged out. Waiting for processes to exit. Nov 6 00:21:20.893456 systemd-logind[1474]: Removed session 22. 
Nov 6 00:21:22.980496 kubelet[2686]: E1106 00:21:22.980357 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" podUID="22d06117-b04a-43e9-87e1-aa14b7fcef4f" Nov 6 00:21:22.984357 kubelet[2686]: E1106 00:21:22.984293 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-fx2xl" podUID="d092cd15-7a3f-47f6-bde9-78a01defdd36" Nov 6 00:21:25.896040 systemd[1]: Started sshd@22-64.23.183.231:22-139.178.68.195:49440.service - OpenSSH per-connection server daemon (139.178.68.195:49440). Nov 6 00:21:25.979819 kubelet[2686]: E1106 00:21:25.979740 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8495cfffbb-59fst" podUID="fa727a0e-c8e3-4851-8c43-fa33e679ce52" Nov 6 00:21:26.014584 sshd[5058]: Accepted publickey for core from 139.178.68.195 port 49440 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:26.015850 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:26.026612 systemd-logind[1474]: New session 23 of user core. Nov 6 00:21:26.031871 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 00:21:26.247911 sshd[5061]: Connection closed by 139.178.68.195 port 49440 Nov 6 00:21:26.248647 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:26.256779 systemd-logind[1474]: Session 23 logged out. Waiting for processes to exit. Nov 6 00:21:26.257068 systemd[1]: sshd@22-64.23.183.231:22-139.178.68.195:49440.service: Deactivated successfully. Nov 6 00:21:26.260316 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 00:21:26.265517 systemd-logind[1474]: Removed session 23. 
Nov 6 00:21:27.981570 containerd[1494]: time="2025-11-06T00:21:27.980857382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 6 00:21:28.336168 containerd[1494]: time="2025-11-06T00:21:28.335762654Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:21:28.337514 containerd[1494]: time="2025-11-06T00:21:28.337133074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 6 00:21:28.337664 containerd[1494]: time="2025-11-06T00:21:28.337589046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 6 00:21:28.337930 kubelet[2686]: E1106 00:21:28.337878 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 6 00:21:28.338810 kubelet[2686]: E1106 00:21:28.337951 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 6 00:21:28.338810 kubelet[2686]: E1106 00:21:28.338176 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66d8bc69bc-fhr2k_calico-system(cfa7aa38-883a-4fbf-a2b0-653ce5e79003): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:21:28.339910 containerd[1494]: time="2025-11-06T00:21:28.339870215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 6 00:21:28.696504 containerd[1494]: time="2025-11-06T00:21:28.694792346Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:21:28.703564 containerd[1494]: time="2025-11-06T00:21:28.703030820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 6 00:21:28.703564 containerd[1494]: time="2025-11-06T00:21:28.703202208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 6 00:21:28.703802 kubelet[2686]: E1106 00:21:28.703685 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 6 00:21:28.703802 kubelet[2686]: E1106 00:21:28.703737 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 6 00:21:28.703954 kubelet[2686]: E1106 00:21:28.703830 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66d8bc69bc-fhr2k_calico-system(cfa7aa38-883a-4fbf-a2b0-653ce5e79003): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:21:28.703954 kubelet[2686]: E1106 00:21:28.703871 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66d8bc69bc-fhr2k" podUID="cfa7aa38-883a-4fbf-a2b0-653ce5e79003"
Nov 6 00:21:29.980131 containerd[1494]: time="2025-11-06T00:21:29.980021324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 6 00:21:30.287853 containerd[1494]: time="2025-11-06T00:21:30.287093704Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:21:30.290155 containerd[1494]: time="2025-11-06T00:21:30.290002274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 6 00:21:30.290155 containerd[1494]: time="2025-11-06T00:21:30.290118427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 6 00:21:30.290655 kubelet[2686]: E1106 00:21:30.290597 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:21:30.291126 kubelet[2686]: E1106 00:21:30.290673 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:21:30.291126 kubelet[2686]: E1106 00:21:30.290793 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5765b6cb-8c67x_calico-apiserver(221557dd-55b6-4b8e-a63d-bc09352c8c41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:21:30.291126 kubelet[2686]: E1106 00:21:30.290831 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-8c67x" podUID="221557dd-55b6-4b8e-a63d-bc09352c8c41"
Nov 6 00:21:31.265991 systemd[1]: Started sshd@23-64.23.183.231:22-139.178.68.195:49442.service - OpenSSH per-connection server daemon (139.178.68.195:49442).
Nov 6 00:21:31.364129 sshd[5077]: Accepted publickey for core from 139.178.68.195 port 49442 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw
Nov 6 00:21:31.367700 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:21:31.378710 systemd-logind[1474]: New session 24 of user core.
Nov 6 00:21:31.385805 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 6 00:21:31.620480 sshd[5080]: Connection closed by 139.178.68.195 port 49442
Nov 6 00:21:31.619524 sshd-session[5077]: pam_unix(sshd:session): session closed for user core
Nov 6 00:21:31.628811 systemd[1]: sshd@23-64.23.183.231:22-139.178.68.195:49442.service: Deactivated successfully.
Nov 6 00:21:31.631946 systemd[1]: session-24.scope: Deactivated successfully.
Nov 6 00:21:31.633689 systemd-logind[1474]: Session 24 logged out. Waiting for processes to exit.
Nov 6 00:21:31.636969 systemd-logind[1474]: Removed session 24.
Nov 6 00:21:31.991713 containerd[1494]: time="2025-11-06T00:21:31.991412618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 6 00:21:32.317594 containerd[1494]: time="2025-11-06T00:21:32.315496485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:21:32.319485 containerd[1494]: time="2025-11-06T00:21:32.318245356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 6 00:21:32.319485 containerd[1494]: time="2025-11-06T00:21:32.318357654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 6 00:21:32.332380 kubelet[2686]: E1106 00:21:32.332313 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 6 00:21:32.332380 kubelet[2686]: E1106 00:21:32.332382 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 6 00:21:32.332824 kubelet[2686]: E1106 00:21:32.332484 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fsxfk_calico-system(14475adc-4ac3-4f9b-9293-bb510ff52d31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:21:32.335982 containerd[1494]: time="2025-11-06T00:21:32.335604248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 6 00:21:32.655169 containerd[1494]: time="2025-11-06T00:21:32.654843613Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:21:32.656912 containerd[1494]: time="2025-11-06T00:21:32.656819736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 6 00:21:32.657431 containerd[1494]: time="2025-11-06T00:21:32.656851898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 6 00:21:32.657821 kubelet[2686]: E1106 00:21:32.657734 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 6 00:21:32.657821 kubelet[2686]: E1106 00:21:32.657796 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 6 00:21:32.658548 kubelet[2686]: E1106 00:21:32.658078 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fsxfk_calico-system(14475adc-4ac3-4f9b-9293-bb510ff52d31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:21:32.658548 kubelet[2686]: E1106 00:21:32.658144 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fsxfk" podUID="14475adc-4ac3-4f9b-9293-bb510ff52d31"
Nov 6 00:21:33.980497 containerd[1494]: time="2025-11-06T00:21:33.980072722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 6 00:21:34.255219 containerd[1494]: time="2025-11-06T00:21:34.254913059Z" level=info msg="TaskExit event in podsandbox handler container_id:\"073e6cdb9eca86418c71a81787240824f8d9c13ee9c7a71a55dfe70893bcd73b\" id:\"153ef8a76473f9639c74b5f0a7fb998b07c38d52240924638ea4bb9084c42483\" pid:5106 exited_at:{seconds:1762388494 nanos:254028470}"
Nov 6 00:21:34.296328 containerd[1494]: time="2025-11-06T00:21:34.295698903Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:21:34.297124 containerd[1494]: time="2025-11-06T00:21:34.297081012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 6 00:21:34.297671 containerd[1494]: time="2025-11-06T00:21:34.297474800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 6 00:21:34.297854 kubelet[2686]: E1106 00:21:34.297786 2686 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:21:34.297854 kubelet[2686]: E1106 00:21:34.297837 2686 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:21:34.298289 kubelet[2686]: E1106 00:21:34.297960 2686 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5765b6cb-hzncn_calico-apiserver(22d06117-b04a-43e9-87e1-aa14b7fcef4f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:21:34.298289 kubelet[2686]: E1106 00:21:34.298111 2686 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5765b6cb-hzncn" podUID="22d06117-b04a-43e9-87e1-aa14b7fcef4f"