Nov 4 23:54:28.252025 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 4 23:54:28.252067 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:54:28.252081 kernel: BIOS-provided physical RAM map:
Nov 4 23:54:28.252088 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 4 23:54:28.252095 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 4 23:54:28.252103 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 23:54:28.252111 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 4 23:54:28.252122 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 4 23:54:28.252142 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 23:54:28.252156 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 23:54:28.252167 kernel: NX (Execute Disable) protection: active
Nov 4 23:54:28.252178 kernel: APIC: Static calls initialized
Nov 4 23:54:28.252189 kernel: SMBIOS 2.8 present.
Nov 4 23:54:28.252197 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 4 23:54:28.252206 kernel: DMI: Memory slots populated: 1/1
Nov 4 23:54:28.252217 kernel: Hypervisor detected: KVM
Nov 4 23:54:28.252228 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 4 23:54:28.252243 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 23:54:28.252256 kernel: kvm-clock: using sched offset of 3872501078 cycles
Nov 4 23:54:28.252271 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 23:54:28.252298 kernel: tsc: Detected 2494.138 MHz processor
Nov 4 23:54:28.252307 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 23:54:28.252316 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 23:54:28.252328 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 4 23:54:28.252337 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 23:54:28.252346 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 23:54:28.252354 kernel: ACPI: Early table checksum verification disabled
Nov 4 23:54:28.252363 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 4 23:54:28.252372 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:54:28.252381 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:54:28.252392 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:54:28.252400 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 4 23:54:28.252409 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:54:28.252417 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:54:28.252426 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:54:28.252434 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:54:28.252443 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 4 23:54:28.252454 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 4 23:54:28.252466 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 4 23:54:28.252474 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 4 23:54:28.252488 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 4 23:54:28.252496 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 4 23:54:28.252508 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 4 23:54:28.252517 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 4 23:54:28.252526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 4 23:54:28.252535 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Nov 4 23:54:28.252545 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Nov 4 23:54:28.252553 kernel: Zone ranges:
Nov 4 23:54:28.252565 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 23:54:28.252574 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 4 23:54:28.252583 kernel: Normal empty
Nov 4 23:54:28.252592 kernel: Device empty
Nov 4 23:54:28.252601 kernel: Movable zone start for each node
Nov 4 23:54:28.252610 kernel: Early memory node ranges
Nov 4 23:54:28.252619 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 23:54:28.252628 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 4 23:54:28.252640 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 4 23:54:28.252649 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 23:54:28.252664 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 23:54:28.252677 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 4 23:54:28.252691 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 23:54:28.252712 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 23:54:28.252723 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 23:54:28.252738 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 23:54:28.252747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 23:54:28.252756 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 23:54:28.252767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 23:54:28.252776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 23:54:28.252785 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 23:54:28.252794 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 23:54:28.252806 kernel: TSC deadline timer available
Nov 4 23:54:28.252815 kernel: CPU topo: Max. logical packages: 1
Nov 4 23:54:28.252824 kernel: CPU topo: Max. logical dies: 1
Nov 4 23:54:28.252833 kernel: CPU topo: Max. dies per package: 1
Nov 4 23:54:28.252842 kernel: CPU topo: Max. threads per core: 1
Nov 4 23:54:28.252851 kernel: CPU topo: Num. cores per package: 2
Nov 4 23:54:28.252860 kernel: CPU topo: Num. threads per package: 2
Nov 4 23:54:28.252869 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 4 23:54:28.252881 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 23:54:28.252890 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 4 23:54:28.252899 kernel: Booting paravirtualized kernel on KVM
Nov 4 23:54:28.252908 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 23:54:28.252917 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 4 23:54:28.252927 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 4 23:54:28.252938 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 4 23:54:28.252950 kernel: pcpu-alloc: [0] 0 1
Nov 4 23:54:28.252958 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 4 23:54:28.252969 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:54:28.252979 kernel: random: crng init done
Nov 4 23:54:28.252988 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 23:54:28.252997 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 4 23:54:28.253008 kernel: Fallback order for Node 0: 0
Nov 4 23:54:28.253017 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Nov 4 23:54:28.253034 kernel: Policy zone: DMA32
Nov 4 23:54:28.253048 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 23:54:28.253061 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 4 23:54:28.253074 kernel: Kernel/User page tables isolation: enabled
Nov 4 23:54:28.253088 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 23:54:28.253101 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 23:54:28.253114 kernel: Dynamic Preempt: voluntary
Nov 4 23:54:28.253124 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 23:54:28.253138 kernel: rcu: RCU event tracing is enabled.
Nov 4 23:54:28.253147 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 4 23:54:28.253162 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 23:54:28.253177 kernel: Rude variant of Tasks RCU enabled.
Nov 4 23:54:28.253193 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 23:54:28.253208 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 23:54:28.253216 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 4 23:54:28.253226 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:54:28.253238 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:54:28.253249 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:54:28.253259 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 4 23:54:28.253300 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 23:54:28.253319 kernel: Console: colour VGA+ 80x25
Nov 4 23:54:28.253332 kernel: printk: legacy console [tty0] enabled
Nov 4 23:54:28.253345 kernel: printk: legacy console [ttyS0] enabled
Nov 4 23:54:28.253359 kernel: ACPI: Core revision 20240827
Nov 4 23:54:28.253369 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 23:54:28.253388 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 23:54:28.253400 kernel: x2apic enabled
Nov 4 23:54:28.253410 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 23:54:28.253419 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 23:54:28.253429 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Nov 4 23:54:28.253445 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Nov 4 23:54:28.253455 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 4 23:54:28.253466 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 4 23:54:28.253476 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 23:54:28.253488 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 23:54:28.253498 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 23:54:28.253902 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 4 23:54:28.253921 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 23:54:28.253931 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 23:54:28.253941 kernel: MDS: Mitigation: Clear CPU buffers
Nov 4 23:54:28.253951 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 4 23:54:28.253965 kernel: active return thunk: its_return_thunk
Nov 4 23:54:28.253974 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 4 23:54:28.253984 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 23:54:28.253994 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 23:54:28.254003 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 23:54:28.254013 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 23:54:28.254023 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 4 23:54:28.254036 kernel: Freeing SMP alternatives memory: 32K
Nov 4 23:54:28.254045 kernel: pid_max: default: 32768 minimum: 301
Nov 4 23:54:28.254055 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 23:54:28.254065 kernel: landlock: Up and running.
Nov 4 23:54:28.254074 kernel: SELinux: Initializing.
Nov 4 23:54:28.254084 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 23:54:28.254094 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 23:54:28.254107 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 4 23:54:28.254117 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 4 23:54:28.254127 kernel: signal: max sigframe size: 1776
Nov 4 23:54:28.254136 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 23:54:28.254147 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 23:54:28.254156 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 23:54:28.254166 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 4 23:54:28.254178 kernel: smp: Bringing up secondary CPUs ...
Nov 4 23:54:28.254192 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 23:54:28.254202 kernel: .... node #0, CPUs: #1
Nov 4 23:54:28.254211 kernel: smp: Brought up 1 node, 2 CPUs
Nov 4 23:54:28.254221 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Nov 4 23:54:28.254240 kernel: Memory: 1989436K/2096612K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 102612K reserved, 0K cma-reserved)
Nov 4 23:54:28.254255 kernel: devtmpfs: initialized
Nov 4 23:54:28.254291 kernel: x86/mm: Memory block size: 128MB
Nov 4 23:54:28.254308 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 23:54:28.254323 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 4 23:54:28.254338 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 23:54:28.254351 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 23:54:28.254364 kernel: audit: initializing netlink subsys (disabled)
Nov 4 23:54:28.254377 kernel: audit: type=2000 audit(1762300465.765:1): state=initialized audit_enabled=0 res=1
Nov 4 23:54:28.254396 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 23:54:28.254409 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 23:54:28.254421 kernel: cpuidle: using governor menu
Nov 4 23:54:28.254435 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 23:54:28.254448 kernel: dca service started, version 1.12.1
Nov 4 23:54:28.254462 kernel: PCI: Using configuration type 1 for base access
Nov 4 23:54:28.254479 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 23:54:28.254497 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 23:54:28.254510 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 23:54:28.254523 kernel: ACPI: Added _OSI(Module Device)
Nov 4 23:54:28.254537 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 23:54:28.254550 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 23:54:28.254562 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 23:54:28.254575 kernel: ACPI: Interpreter enabled
Nov 4 23:54:28.254592 kernel: ACPI: PM: (supports S0 S5)
Nov 4 23:54:28.254605 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 23:54:28.254618 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 23:54:28.254633 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 23:54:28.254648 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 4 23:54:28.254662 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 23:54:28.255017 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 23:54:28.255254 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 4 23:54:28.255822 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 4 23:54:28.257567 kernel: acpiphp: Slot [3] registered
Nov 4 23:54:28.257590 kernel: acpiphp: Slot [4] registered
Nov 4 23:54:28.257606 kernel: acpiphp: Slot [5] registered
Nov 4 23:54:28.257620 kernel: acpiphp: Slot [6] registered
Nov 4 23:54:28.257637 kernel: acpiphp: Slot [7] registered
Nov 4 23:54:28.257647 kernel: acpiphp: Slot [8] registered
Nov 4 23:54:28.257657 kernel: acpiphp: Slot [9] registered
Nov 4 23:54:28.257667 kernel: acpiphp: Slot [10] registered
Nov 4 23:54:28.257677 kernel: acpiphp: Slot [11] registered
Nov 4 23:54:28.257687 kernel: acpiphp: Slot [12] registered
Nov 4 23:54:28.257700 kernel: acpiphp: Slot [13] registered
Nov 4 23:54:28.257709 kernel: acpiphp: Slot [14] registered
Nov 4 23:54:28.257722 kernel: acpiphp: Slot [15] registered
Nov 4 23:54:28.257732 kernel: acpiphp: Slot [16] registered
Nov 4 23:54:28.257742 kernel: acpiphp: Slot [17] registered
Nov 4 23:54:28.257751 kernel: acpiphp: Slot [18] registered
Nov 4 23:54:28.257761 kernel: acpiphp: Slot [19] registered
Nov 4 23:54:28.257771 kernel: acpiphp: Slot [20] registered
Nov 4 23:54:28.257781 kernel: acpiphp: Slot [21] registered
Nov 4 23:54:28.257793 kernel: acpiphp: Slot [22] registered
Nov 4 23:54:28.257803 kernel: acpiphp: Slot [23] registered
Nov 4 23:54:28.257812 kernel: acpiphp: Slot [24] registered
Nov 4 23:54:28.257822 kernel: acpiphp: Slot [25] registered
Nov 4 23:54:28.257831 kernel: acpiphp: Slot [26] registered
Nov 4 23:54:28.257841 kernel: acpiphp: Slot [27] registered
Nov 4 23:54:28.257851 kernel: acpiphp: Slot [28] registered
Nov 4 23:54:28.257863 kernel: acpiphp: Slot [29] registered
Nov 4 23:54:28.257873 kernel: acpiphp: Slot [30] registered
Nov 4 23:54:28.257882 kernel: acpiphp: Slot [31] registered
Nov 4 23:54:28.257892 kernel: PCI host bridge to bus 0000:00
Nov 4 23:54:28.258103 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 23:54:28.258228 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 23:54:28.258361 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 23:54:28.258485 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 4 23:54:28.258605 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 4 23:54:28.258723 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 23:54:28.258896 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 4 23:54:28.259048 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 4 23:54:28.259203 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 4 23:54:28.259807 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Nov 4 23:54:28.261188 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 4 23:54:28.261395 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 4 23:54:28.261536 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 4 23:54:28.261672 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 4 23:54:28.261835 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 4 23:54:28.262018 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Nov 4 23:54:28.262194 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 4 23:54:28.264392 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 4 23:54:28.264552 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 4 23:54:28.264703 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 4 23:54:28.264841 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 4 23:54:28.264972 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 4 23:54:28.265105 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Nov 4 23:54:28.265272 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Nov 4 23:54:28.265432 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 23:54:28.265627 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 23:54:28.265768 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Nov 4 23:54:28.265901 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Nov 4 23:54:28.266034 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 4 23:54:28.266177 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 23:54:28.267363 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Nov 4 23:54:28.267515 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Nov 4 23:54:28.267650 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 4 23:54:28.267788 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:54:28.267939 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Nov 4 23:54:28.268151 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Nov 4 23:54:28.268371 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 4 23:54:28.268573 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:54:28.268762 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Nov 4 23:54:28.268981 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Nov 4 23:54:28.269171 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 4 23:54:28.270452 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:54:28.270682 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Nov 4 23:54:28.270893 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Nov 4 23:54:28.271104 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 4 23:54:28.271354 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 23:54:28.271571 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Nov 4 23:54:28.273066 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 4 23:54:28.273089 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 23:54:28.273100 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 23:54:28.273113 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 23:54:28.273123 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 23:54:28.273134 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 4 23:54:28.273149 kernel: iommu: Default domain type: Translated
Nov 4 23:54:28.273159 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 23:54:28.273169 kernel: PCI: Using ACPI for IRQ routing
Nov 4 23:54:28.273179 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 23:54:28.273189 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 4 23:54:28.273199 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 4 23:54:28.273381 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 4 23:54:28.273525 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 4 23:54:28.273662 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 23:54:28.273674 kernel: vgaarb: loaded
Nov 4 23:54:28.273685 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 23:54:28.273694 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 23:54:28.273704 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 23:54:28.273714 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 23:54:28.273727 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 23:54:28.273737 kernel: pnp: PnP ACPI init
Nov 4 23:54:28.273747 kernel: pnp: PnP ACPI: found 4 devices
Nov 4 23:54:28.273757 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 23:54:28.273767 kernel: NET: Registered PF_INET protocol family
Nov 4 23:54:28.273777 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 23:54:28.273790 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 4 23:54:28.273802 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 23:54:28.273812 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 4 23:54:28.273822 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 4 23:54:28.273832 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 4 23:54:28.273842 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 4 23:54:28.273852 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 4 23:54:28.273862 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 23:54:28.273874 kernel: NET: Registered PF_XDP protocol family
Nov 4 23:54:28.274004 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 23:54:28.274125 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 23:54:28.274244 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 23:54:28.275335 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 4 23:54:28.275468 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 4 23:54:28.275609 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 4 23:54:28.275752 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 4 23:54:28.275766 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 4 23:54:28.275904 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 29580 usecs
Nov 4 23:54:28.275917 kernel: PCI: CLS 0 bytes, default 64
Nov 4 23:54:28.275928 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 4 23:54:28.275938 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Nov 4 23:54:28.275948 kernel: Initialise system trusted keyrings
Nov 4 23:54:28.275962 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 4 23:54:28.275971 kernel: Key type asymmetric registered
Nov 4 23:54:28.275981 kernel: Asymmetric key parser 'x509' registered
Nov 4 23:54:28.275991 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 23:54:28.276001 kernel: io scheduler mq-deadline registered
Nov 4 23:54:28.276013 kernel: io scheduler kyber registered
Nov 4 23:54:28.276024 kernel: io scheduler bfq registered
Nov 4 23:54:28.276036 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 23:54:28.276046 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 4 23:54:28.276056 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 4 23:54:28.276066 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 4 23:54:28.276076 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 23:54:28.276086 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 23:54:28.276096 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 23:54:28.276108 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 23:54:28.276117 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 23:54:28.276361 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 4 23:54:28.276495 kernel: rtc_cmos 00:03: registered as rtc0
Nov 4 23:54:28.276661 kernel: rtc_cmos 00:03: setting system clock to 2025-11-04T23:54:26 UTC (1762300466)
Nov 4 23:54:28.276786 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 4 23:54:28.276803 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 23:54:28.276814 kernel: intel_pstate: CPU model not supported
Nov 4 23:54:28.276823 kernel: NET: Registered PF_INET6 protocol family
Nov 4 23:54:28.276833 kernel: Segment Routing with IPv6
Nov 4 23:54:28.276843 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 23:54:28.276853 kernel: NET: Registered PF_PACKET protocol family
Nov 4 23:54:28.276862 kernel: Key type dns_resolver registered
Nov 4 23:54:28.276875 kernel: IPI shorthand broadcast: enabled
Nov 4 23:54:28.276885 kernel: sched_clock: Marking stable (1355005562, 178528778)->(1569881257, -36346917)
Nov 4 23:54:28.276895 kernel: registered taskstats version 1
Nov 4 23:54:28.276980 kernel: Loading compiled-in X.509 certificates
Nov 4 23:54:28.276992 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44'
Nov 4 23:54:28.277002 kernel: Demotion targets for Node 0: null
Nov 4 23:54:28.277012 kernel: Key type .fscrypt registered
Nov 4 23:54:28.277024 kernel: Key type fscrypt-provisioning registered
Nov 4 23:54:28.277050 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 23:54:28.277063 kernel: ima: Allocated hash algorithm: sha1
Nov 4 23:54:28.277073 kernel: ima: No architecture policies found
Nov 4 23:54:28.277083 kernel: clk: Disabling unused clocks
Nov 4 23:54:28.277093 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 4 23:54:28.277104 kernel: Write protecting the kernel read-only data: 40960k
Nov 4 23:54:28.277116 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 4 23:54:28.277126 kernel: Run /init as init process
Nov 4 23:54:28.277137 kernel: with arguments:
Nov 4 23:54:28.277147 kernel: /init
Nov 4 23:54:28.277157 kernel: with environment:
Nov 4 23:54:28.277171 kernel: HOME=/
Nov 4 23:54:28.277181 kernel: TERM=linux
Nov 4 23:54:28.277191 kernel: SCSI subsystem initialized
Nov 4 23:54:28.277203 kernel: libata version 3.00 loaded.
Nov 4 23:54:28.277364 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 4 23:54:28.277543 kernel: scsi host0: ata_piix
Nov 4 23:54:28.277697 kernel: scsi host1: ata_piix
Nov 4 23:54:28.277711 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Nov 4 23:54:28.277726 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Nov 4 23:54:28.277736 kernel: ACPI: bus type USB registered
Nov 4 23:54:28.277746 kernel: usbcore: registered new interface driver usbfs
Nov 4 23:54:28.277756 kernel: usbcore: registered new interface driver hub
Nov 4 23:54:28.277767 kernel: usbcore: registered new device driver usb
Nov 4 23:54:28.277904 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 4 23:54:28.278038 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 4 23:54:28.278174 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 4 23:54:28.278324 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 4 23:54:28.278489 kernel: hub 1-0:1.0: USB hub found
Nov 4 23:54:28.278630 kernel: hub 1-0:1.0: 2 ports detected
Nov 4 23:54:28.278791 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 4 23:54:28.278925 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 4 23:54:28.278939 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 23:54:28.278950 kernel: GPT:16515071 != 125829119
Nov 4 23:54:28.278960 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 23:54:28.278973 kernel: GPT:16515071 != 125829119
Nov 4 23:54:28.278983 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 23:54:28.278993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 4 23:54:28.279132 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 4 23:54:28.279263 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Nov 4 23:54:28.279429 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Nov 4 23:54:28.279579 kernel: scsi host2: Virtio SCSI HBA
Nov 4 23:54:28.279593 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 23:54:28.279604 kernel: device-mapper: uevent: version 1.0.3
Nov 4 23:54:28.279614 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 23:54:28.279625 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 4 23:54:28.279636 kernel: raid6: avx2x4 gen() 22270 MB/s
Nov 4 23:54:28.279646 kernel: raid6: avx2x2 gen() 24918 MB/s
Nov 4 23:54:28.279660 kernel: raid6: avx2x1 gen() 22580 MB/s
Nov 4 23:54:28.279670 kernel: raid6: using algorithm avx2x2 gen() 24918 MB/s
Nov 4 23:54:28.279681 kernel: raid6: .... xor() 20860 MB/s, rmw enabled
Nov 4 23:54:28.279694 kernel: raid6: using avx2x2 recovery algorithm
Nov 4 23:54:28.279705 kernel: xor: automatically using best checksumming function avx
Nov 4 23:54:28.279715 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 23:54:28.279726 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (162)
Nov 4 23:54:28.279739 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc
Nov 4 23:54:28.279750 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:54:28.279760 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 23:54:28.279771 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 23:54:28.279781 kernel: loop: module loaded
Nov 4 23:54:28.279792 kernel: loop0: detected capacity change from 0 to 100120
Nov 4 23:54:28.279802 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 23:54:28.279817 systemd[1]: Successfully made /usr/ read-only.
Nov 4 23:54:28.279832 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:54:28.279843 systemd[1]: Detected virtualization kvm.
Nov 4 23:54:28.279854 systemd[1]: Detected architecture x86-64.
Nov 4 23:54:28.279864 systemd[1]: Running in initrd.
Nov 4 23:54:28.279874 systemd[1]: No hostname configured, using default hostname.
Nov 4 23:54:28.279888 systemd[1]: Hostname set to .
Nov 4 23:54:28.279901 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 23:54:28.279911 systemd[1]: Queued start job for default target initrd.target.
Nov 4 23:54:28.279922 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:54:28.279933 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:54:28.279943 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:54:28.279958 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 23:54:28.279969 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:54:28.279980 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 23:54:28.279991 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 23:54:28.280002 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:54:28.280013 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:54:28.280026 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:54:28.280037 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:54:28.280048 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:54:28.280059 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:54:28.280070 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:54:28.280080 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:54:28.280091 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:54:28.280104 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 23:54:28.280115 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 23:54:28.280139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:54:28.280160 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:54:28.280176 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:54:28.280191 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:54:28.280202 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 23:54:28.280215 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 23:54:28.280226 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:54:28.280237 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 23:54:28.280249 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 23:54:28.280260 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 23:54:28.280271 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:54:28.280294 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:54:28.280305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:54:28.280317 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 23:54:28.280328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:54:28.280341 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 23:54:28.280352 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 23:54:28.280398 systemd-journald[298]: Collecting audit messages is disabled.
Nov 4 23:54:28.280426 systemd-journald[298]: Journal started
Nov 4 23:54:28.280452 systemd-journald[298]: Runtime Journal (/run/log/journal/d7ef0ce57e08492b95eb226662ff1db0) is 4.9M, max 39.2M, 34.3M free.
Nov 4 23:54:28.282295 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:54:28.285620 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:54:28.301469 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 23:54:28.307877 kernel: Bridge firewalling registered
Nov 4 23:54:28.308326 systemd-modules-load[299]: Inserted module 'br_netfilter'
Nov 4 23:54:28.309378 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:54:28.310153 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:54:28.316046 systemd-tmpfiles[311]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 23:54:28.316997 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:54:28.324242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:54:28.326425 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:54:28.344385 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:54:28.400186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:54:28.401624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:54:28.404426 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 23:54:28.408472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:54:28.436580 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:54:28.439753 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 23:54:28.475119 systemd-resolved[326]: Positive Trust Anchors:
Nov 4 23:54:28.475134 systemd-resolved[326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 23:54:28.475138 systemd-resolved[326]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 23:54:28.475176 systemd-resolved[326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 23:54:28.496157 dracut-cmdline[340]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:54:28.510024 systemd-resolved[326]: Defaulting to hostname 'linux'.
Nov 4 23:54:28.511385 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 23:54:28.512156 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:54:28.596311 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 23:54:28.613322 kernel: iscsi: registered transport (tcp)
Nov 4 23:54:28.640599 kernel: iscsi: registered transport (qla4xxx)
Nov 4 23:54:28.640681 kernel: QLogic iSCSI HBA Driver
Nov 4 23:54:28.669630 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:54:28.690721 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:54:28.694022 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:54:28.754354 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:54:28.756597 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 23:54:28.757815 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 23:54:28.799241 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:54:28.803457 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:54:28.830826 systemd-udevd[581]: Using default interface naming scheme 'v257'.
Nov 4 23:54:28.842846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:54:28.848105 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 23:54:28.878387 dracut-pre-trigger[646]: rd.md=0: removing MD RAID activation
Nov 4 23:54:28.879651 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:54:28.883972 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:54:28.915626 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:54:28.918470 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:54:28.942648 systemd-networkd[687]: lo: Link UP
Nov 4 23:54:28.942659 systemd-networkd[687]: lo: Gained carrier
Nov 4 23:54:28.943871 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:54:28.944451 systemd[1]: Reached target network.target - Network.
Nov 4 23:54:29.004554 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:54:29.008501 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 23:54:29.131823 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 4 23:54:29.144532 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 23:54:29.158001 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 4 23:54:29.169838 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 4 23:54:29.176811 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 23:54:29.187301 kernel: cryptd: max_cpu_qlen set to 1000
Nov 4 23:54:29.204762 disk-uuid[744]: Primary Header is updated.
Nov 4 23:54:29.204762 disk-uuid[744]: Secondary Entries is updated.
Nov 4 23:54:29.204762 disk-uuid[744]: Secondary Header is updated.
Nov 4 23:54:29.227215 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:54:29.227410 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:54:29.228061 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:54:29.235688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:54:29.249466 kernel: AES CTR mode by8 optimization enabled
Nov 4 23:54:29.252608 systemd-networkd[687]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:54:29.252621 systemd-networkd[687]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 23:54:29.254147 systemd-networkd[687]: eth1: Link UP
Nov 4 23:54:29.254897 systemd-networkd[687]: eth1: Gained carrier
Nov 4 23:54:29.254912 systemd-networkd[687]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:54:29.272406 systemd-networkd[687]: eth1: DHCPv4 address 10.124.0.12/20 acquired from 169.254.169.253
Nov 4 23:54:29.296539 systemd-networkd[687]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Nov 4 23:54:29.296546 systemd-networkd[687]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 4 23:54:29.300628 systemd-networkd[687]: eth0: Link UP
Nov 4 23:54:29.300895 systemd-networkd[687]: eth0: Gained carrier
Nov 4 23:54:29.300913 systemd-networkd[687]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Nov 4 23:54:29.322266 systemd-networkd[687]: eth0: DHCPv4 address 64.227.96.36/20, gateway 64.227.96.1 acquired from 169.254.169.253
Nov 4 23:54:29.370302 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 4 23:54:29.454427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:54:29.461288 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:54:29.462693 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:54:29.464596 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:54:29.465160 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:54:29.468468 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 23:54:29.514608 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:54:30.332305 disk-uuid[746]: Warning: The kernel is still using the old partition table.
Nov 4 23:54:30.332305 disk-uuid[746]: The new table will be used at the next reboot or after you
Nov 4 23:54:30.332305 disk-uuid[746]: run partprobe(8) or kpartx(8)
Nov 4 23:54:30.332305 disk-uuid[746]: The operation has completed successfully.
Nov 4 23:54:30.344038 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 23:54:30.344218 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 23:54:30.347118 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 4 23:54:30.354422 systemd-networkd[687]: eth1: Gained IPv6LL
Nov 4 23:54:30.377673 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835)
Nov 4 23:54:30.377740 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:54:30.379689 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:54:30.386125 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:54:30.386209 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:54:30.394369 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:54:30.395920 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 4 23:54:30.399472 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 4 23:54:30.603918 ignition[854]: Ignition 2.22.0
Nov 4 23:54:30.603938 ignition[854]: Stage: fetch-offline
Nov 4 23:54:30.603996 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:30.604015 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:54:30.604306 ignition[854]: parsed url from cmdline: ""
Nov 4 23:54:30.604315 ignition[854]: no config URL provided
Nov 4 23:54:30.604326 ignition[854]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 23:54:30.606988 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:54:30.604350 ignition[854]: no config at "/usr/lib/ignition/user.ign"
Nov 4 23:54:30.604361 ignition[854]: failed to fetch config: resource requires networking
Nov 4 23:54:30.604653 ignition[854]: Ignition finished successfully
Nov 4 23:54:30.610568 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 4 23:54:30.659898 ignition[860]: Ignition 2.22.0
Nov 4 23:54:30.659917 ignition[860]: Stage: fetch
Nov 4 23:54:30.660076 ignition[860]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:30.660085 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:54:30.660246 ignition[860]: parsed url from cmdline: ""
Nov 4 23:54:30.660251 ignition[860]: no config URL provided
Nov 4 23:54:30.660257 ignition[860]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 23:54:30.660266 ignition[860]: no config at "/usr/lib/ignition/user.ign"
Nov 4 23:54:30.660307 ignition[860]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 4 23:54:30.674595 ignition[860]: GET result: OK
Nov 4 23:54:30.674771 ignition[860]: parsing config with SHA512: 611e12994dcad0a84c49354f2c7c29b5b5a91efc73f9ad6d74aaa4ed39f0ec66415f87f2137d3f3b197530481568e5ca76fc2c915277e63fc4a6679e97760355
Nov 4 23:54:30.679508 unknown[860]: fetched base config from "system"
Nov 4 23:54:30.680170 ignition[860]: fetch: fetch complete
Nov 4 23:54:30.679522 unknown[860]: fetched base config from "system"
Nov 4 23:54:30.680178 ignition[860]: fetch: fetch passed
Nov 4 23:54:30.679531 unknown[860]: fetched user config from "digitalocean"
Nov 4 23:54:30.680257 ignition[860]: Ignition finished successfully
Nov 4 23:54:30.685944 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 4 23:54:30.688491 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 23:54:30.734175 ignition[867]: Ignition 2.22.0
Nov 4 23:54:30.734196 ignition[867]: Stage: kargs
Nov 4 23:54:30.734420 ignition[867]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:30.734432 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:54:30.735570 ignition[867]: kargs: kargs passed
Nov 4 23:54:30.737395 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 23:54:30.735627 ignition[867]: Ignition finished successfully
Nov 4 23:54:30.739755 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 23:54:30.785493 ignition[874]: Ignition 2.22.0
Nov 4 23:54:30.786321 ignition[874]: Stage: disks
Nov 4 23:54:30.786543 ignition[874]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:30.786553 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:54:30.789348 ignition[874]: disks: disks passed
Nov 4 23:54:30.789943 ignition[874]: Ignition finished successfully
Nov 4 23:54:30.797611 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 23:54:30.798393 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 23:54:30.798982 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 23:54:30.799869 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:54:30.801001 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:54:30.801922 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:54:30.803922 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 4 23:54:30.843359 systemd-fsck[882]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 4 23:54:30.846682 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 23:54:30.849908 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 23:54:30.983303 kernel: EXT4-fs (vda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none.
Nov 4 23:54:30.984367 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 23:54:30.985933 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:54:30.989261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:54:30.991756 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 23:54:31.006465 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Nov 4 23:54:31.012434 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 4 23:54:31.015300 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890)
Nov 4 23:54:31.015427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 23:54:31.021428 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:54:31.021466 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:54:31.016593 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:54:31.023086 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 23:54:31.028489 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 23:54:31.037305 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:54:31.037387 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:54:31.041199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:54:31.126994 coreos-metadata[893]: Nov 04 23:54:31.126 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 4 23:54:31.132263 coreos-metadata[892]: Nov 04 23:54:31.132 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 4 23:54:31.138255 coreos-metadata[893]: Nov 04 23:54:31.138 INFO Fetch successful
Nov 4 23:54:31.143416 coreos-metadata[892]: Nov 04 23:54:31.143 INFO Fetch successful
Nov 4 23:54:31.144683 coreos-metadata[893]: Nov 04 23:54:31.144 INFO wrote hostname ci-4487.0.0-n-936e1cfeba to /sysroot/etc/hostname
Nov 4 23:54:31.145616 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 4 23:54:31.148158 initrd-setup-root[920]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 23:54:31.158177 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Nov 4 23:54:31.159059 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Nov 4 23:54:31.160557 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory
Nov 4 23:54:31.164823 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 23:54:31.170330 initrd-setup-root[943]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 23:54:31.294927 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 23:54:31.297232 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 23:54:31.299005 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 23:54:31.316366 systemd-networkd[687]: eth0: Gained IPv6LL
Nov 4 23:54:31.328303 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:54:31.342465 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 23:54:31.364398 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 23:54:31.365512 ignition[1012]: INFO : Ignition 2.22.0
Nov 4 23:54:31.365512 ignition[1012]: INFO : Stage: mount
Nov 4 23:54:31.367456 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:31.367456 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:54:31.367456 ignition[1012]: INFO : mount: mount passed
Nov 4 23:54:31.367456 ignition[1012]: INFO : Ignition finished successfully
Nov 4 23:54:31.367744 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 23:54:31.370396 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 23:54:31.391763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:54:31.416315 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1022)
Nov 4 23:54:31.419534 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:54:31.419606 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:54:31.425543 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:54:31.425622 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:54:31.429700 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:54:31.471018 ignition[1038]: INFO : Ignition 2.22.0
Nov 4 23:54:31.471018 ignition[1038]: INFO : Stage: files
Nov 4 23:54:31.472376 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:31.472376 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 4 23:54:31.472376 ignition[1038]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 23:54:31.474415 ignition[1038]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 23:54:31.474415 ignition[1038]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 23:54:31.477664 ignition[1038]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 23:54:31.478413 ignition[1038]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 23:54:31.479232 ignition[1038]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 23:54:31.479155 unknown[1038]: wrote ssh authorized keys file for user: core
Nov 4 23:54:31.480792 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:54:31.481665 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 4 23:54:31.595145 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 23:54:31.749674 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:54:31.750872 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 23:54:31.750872 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 23:54:31.750872 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:54:31.750872 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:54:31.750872 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:54:31.750872 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:54:31.750872 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:54:31.750872 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:54:31.759761 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:54:31.759761 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:54:31.759761 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:54:31.759761 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:54:31.759761 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:54:31.759761 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 4 23:54:32.174731 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 4 23:54:32.515030 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:54:32.515030 ignition[1038]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 4 23:54:32.517851 ignition[1038]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:54:32.519218 ignition[1038]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:54:32.519218 ignition[1038]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 4 23:54:32.521058 ignition[1038]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 23:54:32.521058 ignition[1038]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 23:54:32.521058 ignition[1038]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:54:32.521058 ignition[1038]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:54:32.521058 ignition[1038]: INFO : files: files passed
Nov 4 23:54:32.521058 ignition[1038]: INFO : Ignition finished successfully
Nov 4 23:54:32.523781 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 23:54:32.527528 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 23:54:32.529464 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 23:54:32.551020 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 23:54:32.551199 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 4 23:54:32.565480 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:54:32.565480 initrd-setup-root-after-ignition[1071]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:54:32.567959 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:54:32.568528 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 23:54:32.569863 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 4 23:54:32.571418 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 4 23:54:32.627767 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 4 23:54:32.627885 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 4 23:54:32.629304 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 4 23:54:32.630167 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 4 23:54:32.631536 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 4 23:54:32.632681 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 4 23:54:32.662468 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:54:32.665365 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 4 23:54:32.695257 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 23:54:32.695467 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Nov 4 23:54:32.696798 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:54:32.697802 systemd[1]: Stopped target timers.target - Timer Units. Nov 4 23:54:32.698802 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 4 23:54:32.699056 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:54:32.700699 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 4 23:54:32.701351 systemd[1]: Stopped target basic.target - Basic System. Nov 4 23:54:32.702180 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 4 23:54:32.703020 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 23:54:32.704087 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 4 23:54:32.705050 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 4 23:54:32.706116 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 4 23:54:32.707013 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 23:54:32.708093 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 4 23:54:32.709075 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 4 23:54:32.710065 systemd[1]: Stopped target swap.target - Swaps. Nov 4 23:54:32.710922 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 4 23:54:32.711065 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 4 23:54:32.712085 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:54:32.712754 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:54:32.713723 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 4 23:54:32.713935 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 4 23:54:32.714792 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 4 23:54:32.714937 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 4 23:54:32.716092 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 4 23:54:32.716244 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 23:54:32.717538 systemd[1]: ignition-files.service: Deactivated successfully. Nov 4 23:54:32.717641 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 4 23:54:32.718452 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 4 23:54:32.718564 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 4 23:54:32.720401 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 4 23:54:32.724575 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 4 23:54:32.727186 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 4 23:54:32.727445 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:54:32.731536 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 4 23:54:32.731744 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:54:32.734219 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 4 23:54:32.734440 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 23:54:32.747593 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 4 23:54:32.747716 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 4 23:54:32.766048 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Nov 4 23:54:32.777840 ignition[1095]: INFO : Ignition 2.22.0 Nov 4 23:54:32.791109 ignition[1095]: INFO : Stage: umount Nov 4 23:54:32.791109 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:54:32.791109 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 23:54:32.791109 ignition[1095]: INFO : umount: umount passed Nov 4 23:54:32.791109 ignition[1095]: INFO : Ignition finished successfully Nov 4 23:54:32.795584 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 4 23:54:32.795735 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 4 23:54:32.804889 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 4 23:54:32.804994 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 4 23:54:32.807156 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 4 23:54:32.807272 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 4 23:54:32.807950 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 4 23:54:32.808050 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 4 23:54:32.809300 systemd[1]: Stopped target network.target - Network. Nov 4 23:54:32.809854 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 4 23:54:32.809959 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 23:54:32.810637 systemd[1]: Stopped target paths.target - Path Units. Nov 4 23:54:32.811161 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 4 23:54:32.813387 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:54:32.822792 systemd[1]: Stopped target slices.target - Slice Units. Nov 4 23:54:32.823361 systemd[1]: Stopped target sockets.target - Socket Units. Nov 4 23:54:32.826360 systemd[1]: iscsid.socket: Deactivated successfully. 
Nov 4 23:54:32.826442 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 23:54:32.827069 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 4 23:54:32.827142 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 23:54:32.827905 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 4 23:54:32.828007 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 4 23:54:32.828638 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 4 23:54:32.828714 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 4 23:54:32.833384 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 4 23:54:32.834021 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 4 23:54:32.835889 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 4 23:54:32.836036 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 4 23:54:32.841932 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 4 23:54:32.842068 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 4 23:54:32.845915 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 4 23:54:32.846086 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 4 23:54:32.857548 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 4 23:54:32.857765 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 4 23:54:32.863259 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 4 23:54:32.864777 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 4 23:54:32.865493 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:54:32.869468 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 4 23:54:32.870115 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Nov 4 23:54:32.870223 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 23:54:32.870948 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 23:54:32.871028 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:54:32.874467 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 4 23:54:32.874564 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 4 23:54:32.876350 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:54:32.887561 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 4 23:54:32.890664 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:54:32.892903 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 23:54:32.893030 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 4 23:54:32.896352 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 23:54:32.896428 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:54:32.897062 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 23:54:32.897160 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 23:54:32.900630 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 23:54:32.900988 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 23:54:32.901978 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 4 23:54:32.902068 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 23:54:32.907232 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 23:54:32.910093 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Nov 4 23:54:32.910232 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:54:32.911011 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 23:54:32.911104 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:54:32.914580 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 4 23:54:32.914692 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 23:54:32.918743 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 4 23:54:32.918854 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:54:32.922481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:54:32.922592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:54:32.939115 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 23:54:32.944256 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 23:54:32.948446 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 23:54:32.948646 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 23:54:32.950552 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 4 23:54:32.952797 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 23:54:32.977428 systemd[1]: Switching root. Nov 4 23:54:33.029826 systemd-journald[298]: Journal stopped Nov 4 23:54:34.379185 systemd-journald[298]: Received SIGTERM from PID 1 (systemd). 
Nov 4 23:54:34.379388 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 23:54:34.379410 kernel: SELinux: policy capability open_perms=1 Nov 4 23:54:34.379429 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 23:54:34.379451 kernel: SELinux: policy capability always_check_network=0 Nov 4 23:54:34.379465 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 23:54:34.379480 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 23:54:34.379493 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 23:54:34.379509 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 23:54:34.379521 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 23:54:34.379535 kernel: audit: type=1403 audit(1762300473.194:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 23:54:34.379550 systemd[1]: Successfully loaded SELinux policy in 88.678ms. Nov 4 23:54:34.379577 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.334ms. Nov 4 23:54:34.379592 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 23:54:34.379610 systemd[1]: Detected virtualization kvm. Nov 4 23:54:34.379631 systemd[1]: Detected architecture x86-64. Nov 4 23:54:34.379651 systemd[1]: Detected first boot. Nov 4 23:54:34.379674 systemd[1]: Hostname set to <ci-4487.0.0-n-936e1cfeba>. Nov 4 23:54:34.379696 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 23:54:34.379713 zram_generator::config[1139]: No configuration found.
Nov 4 23:54:34.379743 kernel: Guest personality initialized and is inactive Nov 4 23:54:34.379768 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 4 23:54:34.379788 kernel: Initialized host personality Nov 4 23:54:34.379811 kernel: NET: Registered PF_VSOCK protocol family Nov 4 23:54:34.379834 systemd[1]: Populated /etc with preset unit settings. Nov 4 23:54:34.379851 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 23:54:34.379870 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 23:54:34.379892 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 23:54:34.379919 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 4 23:54:34.379939 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 23:54:34.379959 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 23:54:34.379981 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 23:54:34.380001 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 4 23:54:34.380022 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 4 23:54:34.380044 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 4 23:54:34.380070 systemd[1]: Created slice user.slice - User and Session Slice. Nov 4 23:54:34.380092 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:54:34.380113 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:54:34.380150 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 23:54:34.380171 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Nov 4 23:54:34.380194 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 23:54:34.380226 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 23:54:34.380249 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 4 23:54:34.382007 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:54:34.382067 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:54:34.382083 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 23:54:34.382099 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 23:54:34.382120 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 4 23:54:34.382134 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 23:54:34.382147 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:54:34.382161 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 23:54:34.382174 systemd[1]: Reached target slices.target - Slice Units. Nov 4 23:54:34.382188 systemd[1]: Reached target swap.target - Swaps. Nov 4 23:54:34.382201 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 4 23:54:34.382217 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 23:54:34.382232 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 23:54:34.382245 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:54:34.382259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 23:54:34.386295 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:54:34.386382 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Nov 4 23:54:34.386397 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 23:54:34.386411 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 23:54:34.386432 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 23:54:34.386451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:54:34.386472 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 23:54:34.386493 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 23:54:34.386513 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 23:54:34.386528 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 23:54:34.386546 systemd[1]: Reached target machines.target - Containers. Nov 4 23:54:34.386560 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 23:54:34.386573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:54:34.386586 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 23:54:34.386600 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 23:54:34.386613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:54:34.386627 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:54:34.386643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:54:34.386656 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 23:54:34.386671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 4 23:54:34.386686 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 23:54:34.386700 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 23:54:34.386713 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 23:54:34.386727 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 23:54:34.386744 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 23:54:34.386758 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:54:34.386771 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:54:34.386785 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:54:34.386799 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:54:34.386812 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 4 23:54:34.386826 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 23:54:34.386843 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 23:54:34.386857 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:54:34.386870 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 23:54:34.386887 kernel: fuse: init (API version 7.41) Nov 4 23:54:34.386901 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 23:54:34.386914 systemd[1]: Mounted media.mount - External Media Directory. 
Nov 4 23:54:34.386928 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 23:54:34.386942 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 23:54:34.386955 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 23:54:34.386968 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:54:34.386987 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 23:54:34.387000 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 23:54:34.387015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:54:34.387028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:54:34.387042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:54:34.387058 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:54:34.387072 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 23:54:34.387087 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 23:54:34.387100 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:54:34.387112 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:54:34.387125 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 23:54:34.387139 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 23:54:34.387155 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 23:54:34.387169 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:54:34.387183 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:54:34.387196 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Nov 4 23:54:34.387210 kernel: ACPI: bus type drm_connector registered Nov 4 23:54:34.387223 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:54:34.387236 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 23:54:34.387253 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:54:34.387270 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:54:34.387351 systemd-journald[1209]: Collecting audit messages is disabled. Nov 4 23:54:34.387378 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 23:54:34.387391 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 4 23:54:34.387405 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 23:54:34.387421 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:54:34.387436 systemd-journald[1209]: Journal started Nov 4 23:54:34.387460 systemd-journald[1209]: Runtime Journal (/run/log/journal/d7ef0ce57e08492b95eb226662ff1db0) is 4.9M, max 39.2M, 34.3M free. Nov 4 23:54:33.980177 systemd[1]: Queued start job for default target multi-user.target. Nov 4 23:54:34.003971 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 4 23:54:34.004538 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 23:54:34.393399 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 23:54:34.393456 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 23:54:34.396209 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:54:34.400083 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Nov 4 23:54:34.404318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:54:34.411306 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 23:54:34.416306 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:54:34.422374 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 23:54:34.430310 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 23:54:34.432311 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 23:54:34.437999 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 23:54:34.456579 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Nov 4 23:54:34.458342 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Nov 4 23:54:34.459733 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:54:34.472875 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 23:54:34.474730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:54:34.476094 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 23:54:34.481300 kernel: loop1: detected capacity change from 0 to 110984 Nov 4 23:54:34.482680 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 23:54:34.487565 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 23:54:34.491539 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 23:54:34.495605 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Nov 4 23:54:34.529700 kernel: loop2: detected capacity change from 0 to 8 Nov 4 23:54:34.542841 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 23:54:34.543902 systemd-journald[1209]: Time spent on flushing to /var/log/journal/d7ef0ce57e08492b95eb226662ff1db0 is 48.920ms for 1007 entries. Nov 4 23:54:34.543902 systemd-journald[1209]: System Journal (/var/log/journal/d7ef0ce57e08492b95eb226662ff1db0) is 8M, max 163.5M, 155.5M free. Nov 4 23:54:34.601035 systemd-journald[1209]: Received client request to flush runtime journal. Nov 4 23:54:34.601096 kernel: loop3: detected capacity change from 0 to 128048 Nov 4 23:54:34.601123 kernel: loop4: detected capacity change from 0 to 229808 Nov 4 23:54:34.575747 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 23:54:34.579995 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 23:54:34.584843 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:54:34.605472 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 23:54:34.608172 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 23:54:34.631325 kernel: loop5: detected capacity change from 0 to 110984 Nov 4 23:54:34.633162 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Nov 4 23:54:34.633185 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Nov 4 23:54:34.650305 kernel: loop6: detected capacity change from 0 to 8 Nov 4 23:54:34.652812 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:54:34.656297 kernel: loop7: detected capacity change from 0 to 128048 Nov 4 23:54:34.673301 kernel: loop1: detected capacity change from 0 to 229808 Nov 4 23:54:34.679046 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Nov 4 23:54:34.691175 (sd-merge)[1292]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Nov 4 23:54:34.698010 (sd-merge)[1292]: Merged extensions into '/usr'. Nov 4 23:54:34.705780 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 23:54:34.705802 systemd[1]: Reloading... Nov 4 23:54:34.764575 systemd-resolved[1285]: Positive Trust Anchors: Nov 4 23:54:34.765643 systemd-resolved[1285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:54:34.765653 systemd-resolved[1285]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:54:34.765693 systemd-resolved[1285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:54:34.792604 systemd-resolved[1285]: Using system hostname 'ci-4487.0.0-n-936e1cfeba'. Nov 4 23:54:34.819311 zram_generator::config[1327]: No configuration found. Nov 4 23:54:35.082822 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 23:54:35.083083 systemd[1]: Reloading finished in 376 ms. Nov 4 23:54:35.101436 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:54:35.102561 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 23:54:35.107063 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:54:35.116491 systemd[1]: Starting ensure-sysext.service... 
Nov 4 23:54:35.120540 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:54:35.149498 systemd[1]: Reload requested from client PID 1369 ('systemctl') (unit ensure-sysext.service)...
Nov 4 23:54:35.149522 systemd[1]: Reloading...
Nov 4 23:54:35.175045 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 4 23:54:35.175089 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 4 23:54:35.175506 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 4 23:54:35.175918 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 4 23:54:35.180809 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 4 23:54:35.181266 systemd-tmpfiles[1370]: ACLs are not supported, ignoring.
Nov 4 23:54:35.183097 systemd-tmpfiles[1370]: ACLs are not supported, ignoring.
Nov 4 23:54:35.193870 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:54:35.194027 systemd-tmpfiles[1370]: Skipping /boot
Nov 4 23:54:35.212902 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:54:35.215341 systemd-tmpfiles[1370]: Skipping /boot
Nov 4 23:54:35.304336 zram_generator::config[1406]: No configuration found.
Nov 4 23:54:35.533216 systemd[1]: Reloading finished in 383 ms.
Nov 4 23:54:35.559610 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 4 23:54:35.574442 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:54:35.587313 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 23:54:35.589746 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 4 23:54:35.595629 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 4 23:54:35.606304 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 4 23:54:35.610011 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:54:35.614610 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 4 23:54:35.617555 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:54:35.617736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:54:35.620350 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:54:35.629835 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:54:35.653042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:54:35.653774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:54:35.653894 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:54:35.653989 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:54:35.660669 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:54:35.663371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:54:35.663587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:54:35.663673 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:54:35.663763 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:54:35.668986 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:54:35.670730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:54:35.680700 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 23:54:35.681998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:54:35.682141 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:54:35.682291 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:54:35.690338 systemd[1]: Finished ensure-sysext.service.
Nov 4 23:54:35.702444 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 4 23:54:35.708835 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 4 23:54:35.723802 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:54:35.726611 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:54:35.742728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:54:35.743104 systemd-udevd[1449]: Using default interface naming scheme 'v257'.
Nov 4 23:54:35.743817 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:54:35.744574 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:54:35.752493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:54:35.753547 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:54:35.754901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:54:35.758379 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 23:54:35.758633 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 23:54:35.782479 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:54:35.789953 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:54:35.790771 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 4 23:54:35.794133 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 23:54:35.800637 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 4 23:54:35.862820 augenrules[1501]: No rules
Nov 4 23:54:35.866354 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 23:54:35.866569 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 23:54:35.960668 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 4 23:54:35.961617 systemd[1]: Reached target time-set.target - System Time Set.
Nov 4 23:54:36.009107 systemd-networkd[1483]: lo: Link UP
Nov 4 23:54:36.009117 systemd-networkd[1483]: lo: Gained carrier
Nov 4 23:54:36.012447 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:54:36.013893 systemd[1]: Reached target network.target - Network.
Nov 4 23:54:36.018448 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 4 23:54:36.024953 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 4 23:54:36.043240 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Nov 4 23:54:36.048599 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 4 23:54:36.049825 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:54:36.049978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:54:36.054557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:54:36.058176 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:54:36.066923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:54:36.067692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:54:36.067790 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:54:36.067827 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 23:54:36.067846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:54:36.123315 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 4 23:54:36.128565 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 4 23:54:36.136804 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 4 23:54:36.168484 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:54:36.169040 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:54:36.171193 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:54:36.172257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:54:36.173925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:54:36.175518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:54:36.178729 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:54:36.178838 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:54:36.191089 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 4 23:54:36.215868 systemd-networkd[1483]: eth1: Configuring with /run/systemd/network/10-66:08:c4:07:d4:db.network.
Nov 4 23:54:36.228208 systemd-networkd[1483]: eth1: Link UP
Nov 4 23:54:36.230224 systemd-networkd[1483]: eth1: Gained carrier
Nov 4 23:54:36.236977 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection.
Nov 4 23:54:36.256018 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 23:54:36.263528 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 4 23:54:36.316385 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 4 23:54:36.338480 kernel: mousedev: PS/2 mouse device common for all mice
Nov 4 23:54:36.366862 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 4 23:54:36.361796 systemd-networkd[1483]: eth0: Configuring with /run/systemd/network/10-ae:53:22:50:1b:ef.network.
Nov 4 23:54:36.368668 systemd-networkd[1483]: eth0: Link UP
Nov 4 23:54:36.370036 systemd-networkd[1483]: eth0: Gained carrier
Nov 4 23:54:36.370358 kernel: ACPI: button: Power Button [PWRF]
Nov 4 23:54:36.370431 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection.
Nov 4 23:54:36.375129 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection.
Nov 4 23:54:36.376030 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection.
Nov 4 23:54:36.409315 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 4 23:54:36.416326 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 4 23:54:36.420179 ldconfig[1447]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 4 23:54:36.426195 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 4 23:54:36.433554 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 4 23:54:36.475053 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 4 23:54:36.476200 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:54:36.476992 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 4 23:54:36.478933 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 4 23:54:36.479607 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 4 23:54:36.481529 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 4 23:54:36.483353 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 4 23:54:36.484018 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 4 23:54:36.484671 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 4 23:54:36.484730 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:54:36.485291 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:54:36.487540 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 4 23:54:36.491192 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 4 23:54:36.498017 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 4 23:54:36.500510 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 4 23:54:36.502405 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 4 23:54:36.512576 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 4 23:54:36.516193 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 4 23:54:36.518175 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 4 23:54:36.521137 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:54:36.522349 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:54:36.522819 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 4 23:54:36.522848 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 4 23:54:36.526733 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 4 23:54:36.530497 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 4 23:54:36.533583 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 4 23:54:36.537531 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 4 23:54:36.544344 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 4 23:54:36.550657 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 4 23:54:36.552394 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 4 23:54:36.560802 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 4 23:54:36.572364 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 4 23:54:36.575380 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 4 23:54:36.578535 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 4 23:54:36.586847 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 4 23:54:36.606530 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 4 23:54:36.608388 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 4 23:54:36.609013 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 4 23:54:36.611543 systemd[1]: Starting update-engine.service - Update Engine...
Nov 4 23:54:36.616107 jq[1562]: false
Nov 4 23:54:36.617534 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 4 23:54:36.623880 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 4 23:54:36.625086 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 4 23:54:36.625436 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 4 23:54:36.627892 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 4 23:54:36.628114 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 4 23:54:36.641308 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing passwd entry cache
Nov 4 23:54:36.640833 oslogin_cache_refresh[1564]: Refreshing passwd entry cache
Nov 4 23:54:36.668611 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting users, quitting
Nov 4 23:54:36.668611 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 23:54:36.668569 oslogin_cache_refresh[1564]: Failure getting users, quitting
Nov 4 23:54:36.668838 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing group entry cache
Nov 4 23:54:36.668592 oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 23:54:36.668648 oslogin_cache_refresh[1564]: Refreshing group entry cache
Nov 4 23:54:36.671735 extend-filesystems[1563]: Found /dev/vda6
Nov 4 23:54:36.672717 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting groups, quitting
Nov 4 23:54:36.672717 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 23:54:36.672420 oslogin_cache_refresh[1564]: Failure getting groups, quitting
Nov 4 23:54:36.672433 oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 23:54:36.687111 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 4 23:54:36.687422 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 4 23:54:36.689270 jq[1576]: true
Nov 4 23:54:36.696061 extend-filesystems[1563]: Found /dev/vda9
Nov 4 23:54:36.709674 update_engine[1575]: I20251104 23:54:36.706842 1575 main.cc:92] Flatcar Update Engine starting
Nov 4 23:54:36.725009 extend-filesystems[1563]: Checking size of /dev/vda9
Nov 4 23:54:36.727115 tar[1578]: linux-amd64/LICENSE
Nov 4 23:54:36.727115 tar[1578]: linux-amd64/helm
Nov 4 23:54:36.740505 dbus-daemon[1560]: [system] SELinux support is enabled
Nov 4 23:54:36.740854 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 4 23:54:36.747439 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 4 23:54:36.747473 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 4 23:54:36.748015 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 4 23:54:36.748089 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Nov 4 23:54:36.748103 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 4 23:54:36.751935 systemd[1]: Started update-engine.service - Update Engine.
Nov 4 23:54:36.754197 coreos-metadata[1559]: Nov 04 23:54:36.753 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 4 23:54:36.756764 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 4 23:54:36.759780 update_engine[1575]: I20251104 23:54:36.759713 1575 update_check_scheduler.cc:74] Next update check in 7m57s
Nov 4 23:54:36.778653 coreos-metadata[1559]: Nov 04 23:54:36.778 INFO Fetch successful
Nov 4 23:54:36.782196 jq[1604]: true
Nov 4 23:54:36.782607 systemd[1]: motdgen.service: Deactivated successfully.
Nov 4 23:54:36.784673 (ntainerd)[1607]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 4 23:54:36.786468 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 4 23:54:36.832041 extend-filesystems[1563]: Resized partition /dev/vda9
Nov 4 23:54:36.835301 extend-filesystems[1618]: resize2fs 1.47.3 (8-Jul-2025)
Nov 4 23:54:36.858311 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks
Nov 4 23:54:36.942145 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 4 23:54:36.943656 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 4 23:54:36.999334 bash[1636]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 23:54:37.001358 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 4 23:54:37.008680 systemd[1]: Starting sshkeys.service...
Nov 4 23:54:37.024313 kernel: EXT4-fs (vda9): resized filesystem to 14138363
Nov 4 23:54:37.038905 extend-filesystems[1618]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 4 23:54:37.038905 extend-filesystems[1618]: old_desc_blocks = 1, new_desc_blocks = 7
Nov 4 23:54:37.038905 extend-filesystems[1618]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long.
Nov 4 23:54:37.059326 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 4 23:54:37.065436 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 4 23:54:37.065919 extend-filesystems[1563]: Resized filesystem in /dev/vda9
Nov 4 23:54:37.080874 kernel: Console: switching to colour dummy device 80x25
Nov 4 23:54:37.067999 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 4 23:54:37.068835 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 4 23:54:37.085333 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 4 23:54:37.085448 kernel: [drm] features: -context_init
Nov 4 23:54:37.110320 kernel: [drm] number of scanouts: 1
Nov 4 23:54:37.191689 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 4 23:54:37.198694 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 4 23:54:37.223316 kernel: [drm] number of cap sets: 0
Nov 4 23:54:37.225333 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 4 23:54:37.228311 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 4 23:54:37.230047 kernel: Console: switching to colour frame buffer device 128x48
Nov 4 23:54:37.234352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:54:37.241340 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 4 23:54:37.383354 coreos-metadata[1644]: Nov 04 23:54:37.383 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 4 23:54:37.399309 coreos-metadata[1644]: Nov 04 23:54:37.397 INFO Fetch successful
Nov 4 23:54:37.417042 unknown[1644]: wrote ssh authorized keys file for user: core
Nov 4 23:54:37.422548 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 4 23:54:37.444973 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:54:37.452204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:54:37.459969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:54:37.477586 update-ssh-keys[1661]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 23:54:37.480399 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 4 23:54:37.486994 sshd_keygen[1595]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 4 23:54:37.487385 systemd[1]: Finished sshkeys.service.
Nov 4 23:54:37.545906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:54:37.546441 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:54:37.559732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:54:37.596776 systemd-logind[1574]: New seat seat0.
Nov 4 23:54:37.599423 systemd-logind[1574]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 4 23:54:37.599464 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 4 23:54:37.599781 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 4 23:54:37.657973 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 4 23:54:37.662018 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 4 23:54:37.708233 systemd[1]: issuegen.service: Deactivated successfully.
Nov 4 23:54:37.710019 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 4 23:54:37.714307 kernel: EDAC MC: Ver: 3.0.0
Nov 4 23:54:37.716113 systemd-networkd[1483]: eth0: Gained IPv6LL
Nov 4 23:54:37.716829 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection.
Nov 4 23:54:37.717674 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 4 23:54:37.726442 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 4 23:54:37.729749 systemd[1]: Reached target network-online.target - Network is Online.
Nov 4 23:54:37.737769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:54:37.741804 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 4 23:54:37.781317 containerd[1607]: time="2025-11-04T23:54:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 4 23:54:37.781317 containerd[1607]: time="2025-11-04T23:54:37.780364014Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.805742171Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.99µs"
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.805789964Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.805816471Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.806020664Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.806039713Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.806088509Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.806163631Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.806177908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.808462423Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.808497473Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.808516829Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 23:54:37.812888 containerd[1607]: time="2025-11-04T23:54:37.808528757Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.808680211Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.808958781Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.809001968Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.809019016Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.809069477Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.809389434Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.809490364Z" level=info msg="metadata content store policy set" policy=shared
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.812319945Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.812377635Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.812409642Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.812431235Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.812459092Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.812477424Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 4 23:54:37.813543 containerd[1607]: time="2025-11-04T23:54:37.812496027Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812523187Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812544820Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812568786Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812586379Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812606628Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812747704Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812772609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812794075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812826023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812844514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812879221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812899078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812913860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812929908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 23:54:37.818337 containerd[1607]: time="2025-11-04T23:54:37.812945336Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 23:54:37.818978 containerd[1607]: time="2025-11-04T23:54:37.812960600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 23:54:37.818978 containerd[1607]: time="2025-11-04T23:54:37.813036056Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 23:54:37.818978 containerd[1607]: time="2025-11-04T23:54:37.813053606Z" level=info msg="Start snapshots syncer"
Nov 4 23:54:37.818978 containerd[1607]: time="2025-11-04T23:54:37.813084325Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813444441Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813514589Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813650969Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813782491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813809879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813826490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813844288Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813864012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813879754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813896627Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813927997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813944804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.813959798Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814009733Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814036114Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814050000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814064026Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814075891Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814089876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814105876Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814131775Z" level=info msg="runtime interface created" Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814140243Z" level=info msg="created NRI interface" Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814151916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814168831Z" level=info msg="Connect containerd service" Nov 4 23:54:37.819133 containerd[1607]: time="2025-11-04T23:54:37.814227327Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 23:54:37.835378 containerd[1607]: 
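The containerd entries above are logfmt-style records: whitespace-separated `key=value` pairs where values with spaces are double-quoted (`msg="skip loading plugin"`, `error="devmapper not configured: skip plugin"`). A minimal sketch of parsing such lines in Python, assuming this simplified quoting scheme (real containerd output can also backslash-escape quotes inside values, which the regex below tolerates), tallying which plugins were skipped:

```python
import re

# Match key=value tokens; a value is either a double-quoted string
# (allowing backslash escapes) or a run of non-space characters.
TOKEN = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_line(line):
    """Parse one containerd logfmt-style line into a dict of fields."""
    fields = {}
    for key, value in TOKEN.findall(line):
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1]  # strip surrounding quotes
        fields[key] = value
    return fields

# Two sample lines taken from the log above.
lines = [
    'time="2025-11-04T23:54:37.808516829Z" level=info msg="skip loading plugin" '
    'error="devmapper not configured: skip plugin" '
    'id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1',
    'time="2025-11-04T23:54:37.808528757Z" level=info msg="loading plugin" '
    'id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1',
]
skipped = [p["id"] for p in map(parse_line, lines)
           if p["msg"] == "skip loading plugin"]
print(skipped)  # → ['io.containerd.snapshotter.v1.devmapper']
```

As the skipped entries in the log show, containerd probes each snapshotter at startup and disables the ones whose prerequisites are absent (no btrfs filesystem, no devmapper config, no zfs dataset), leaving overlayfs as the one actually used.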
time="2025-11-04T23:54:37.817119356Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:54:37.868379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:54:37.874554 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 23:54:37.878972 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 23:54:37.886792 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 23:54:37.899491 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 23:54:37.900516 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 23:54:37.973666 systemd-networkd[1483]: eth1: Gained IPv6LL Nov 4 23:54:37.974684 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. Nov 4 23:54:38.054133 containerd[1607]: time="2025-11-04T23:54:38.054082792Z" level=info msg="Start subscribing containerd event" Nov 4 23:54:38.054406 containerd[1607]: time="2025-11-04T23:54:38.054333961Z" level=info msg="Start recovering state" Nov 4 23:54:38.054516 containerd[1607]: time="2025-11-04T23:54:38.054492140Z" level=info msg="Start event monitor" Nov 4 23:54:38.054557 containerd[1607]: time="2025-11-04T23:54:38.054520182Z" level=info msg="Start cni network conf syncer for default" Nov 4 23:54:38.054557 containerd[1607]: time="2025-11-04T23:54:38.054531229Z" level=info msg="Start streaming server" Nov 4 23:54:38.054557 containerd[1607]: time="2025-11-04T23:54:38.054545685Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 23:54:38.054640 containerd[1607]: time="2025-11-04T23:54:38.054555942Z" level=info msg="runtime interface starting up..." Nov 4 23:54:38.054640 containerd[1607]: time="2025-11-04T23:54:38.054564221Z" level=info msg="starting plugins..." 
Nov 4 23:54:38.054640 containerd[1607]: time="2025-11-04T23:54:38.054582277Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 23:54:38.059488 containerd[1607]: time="2025-11-04T23:54:38.059334964Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 23:54:38.059488 containerd[1607]: time="2025-11-04T23:54:38.059428189Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 23:54:38.059663 containerd[1607]: time="2025-11-04T23:54:38.059651245Z" level=info msg="containerd successfully booted in 0.283855s" Nov 4 23:54:38.060584 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 23:54:38.193722 tar[1578]: linux-amd64/README.md Nov 4 23:54:38.215992 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 23:54:39.136888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:39.139420 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 23:54:39.142284 systemd[1]: Startup finished in 2.564s (kernel) + 5.381s (initrd) + 6.035s (userspace) = 13.981s. Nov 4 23:54:39.147079 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:54:39.841216 kubelet[1727]: E1104 23:54:39.841155 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:54:39.844816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:54:39.844972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:54:39.845345 systemd[1]: kubelet.service: Consumed 1.348s CPU time, 268.8M memory peak. 
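The "Startup finished" line above breaks boot time into kernel, initrd, and userspace phases. A small sketch of pulling those durations back out of the message (note that systemd rounds each phase and the total independently from microsecond counters, so the printed total can differ from the sum of the rounded phases by a millisecond, as it does here):

```python
import re

# The systemd message as logged above.
line = ("Startup finished in 2.564s (kernel) + 5.381s (initrd) "
        "+ 6.035s (userspace) = 13.981s.")

# Capture each "<seconds>s (<phase>)" pair; the "= 13.981s" total has no
# parenthesized phase name, so it is not matched.
phases = {name: float(secs)
          for secs, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
print(phases)  # → {'kernel': 2.564, 'initrd': 5.381, 'userspace': 6.035}
print(round(sum(phases.values()), 3))
```

The kubelet failure that follows is expected on a node that has not yet been joined to a cluster: `/var/lib/kubelet/config.yaml` is normally written by `kubeadm init`/`kubeadm join`, so the unit exits and systemd schedules a restart until that file exists.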
Nov 4 23:54:39.978483 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 23:54:39.980504 systemd[1]: Started sshd@0-64.227.96.36:22-139.178.89.65:55832.service - OpenSSH per-connection server daemon (139.178.89.65:55832). Nov 4 23:54:40.079110 sshd[1740]: Accepted publickey for core from 139.178.89.65 port 55832 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:40.081109 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:40.094055 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 23:54:40.096440 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 23:54:40.100598 systemd-logind[1574]: New session 1 of user core. Nov 4 23:54:40.132666 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 23:54:40.136808 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 23:54:40.155095 (systemd)[1745]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 23:54:40.158773 systemd-logind[1574]: New session c1 of user core. Nov 4 23:54:40.356243 systemd[1745]: Queued start job for default target default.target. Nov 4 23:54:40.365079 systemd[1745]: Created slice app.slice - User Application Slice. Nov 4 23:54:40.365134 systemd[1745]: Reached target paths.target - Paths. Nov 4 23:54:40.365204 systemd[1745]: Reached target timers.target - Timers. Nov 4 23:54:40.367317 systemd[1745]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 23:54:40.393805 systemd[1745]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 23:54:40.393905 systemd[1745]: Reached target sockets.target - Sockets. Nov 4 23:54:40.393988 systemd[1745]: Reached target basic.target - Basic System. Nov 4 23:54:40.394045 systemd[1745]: Reached target default.target - Main User Target. 
Nov 4 23:54:40.394090 systemd[1745]: Startup finished in 225ms. Nov 4 23:54:40.394608 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 23:54:40.404627 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 23:54:40.477462 systemd[1]: Started sshd@1-64.227.96.36:22-139.178.89.65:55842.service - OpenSSH per-connection server daemon (139.178.89.65:55842). Nov 4 23:54:40.558730 sshd[1756]: Accepted publickey for core from 139.178.89.65 port 55842 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:40.560651 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:40.566923 systemd-logind[1574]: New session 2 of user core. Nov 4 23:54:40.579628 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 4 23:54:40.643393 sshd[1759]: Connection closed by 139.178.89.65 port 55842 Nov 4 23:54:40.643258 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Nov 4 23:54:40.659684 systemd[1]: sshd@1-64.227.96.36:22-139.178.89.65:55842.service: Deactivated successfully. Nov 4 23:54:40.661582 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 23:54:40.662462 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit. Nov 4 23:54:40.665718 systemd[1]: Started sshd@2-64.227.96.36:22-139.178.89.65:55856.service - OpenSSH per-connection server daemon (139.178.89.65:55856). Nov 4 23:54:40.666426 systemd-logind[1574]: Removed session 2. Nov 4 23:54:40.740994 sshd[1765]: Accepted publickey for core from 139.178.89.65 port 55856 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:40.742761 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:40.750352 systemd-logind[1574]: New session 3 of user core. Nov 4 23:54:40.759607 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 4 23:54:40.817055 sshd[1768]: Connection closed by 139.178.89.65 port 55856 Nov 4 23:54:40.817699 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Nov 4 23:54:40.828773 systemd[1]: sshd@2-64.227.96.36:22-139.178.89.65:55856.service: Deactivated successfully. Nov 4 23:54:40.831376 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 23:54:40.833172 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Nov 4 23:54:40.834854 systemd-logind[1574]: Removed session 3. Nov 4 23:54:40.836208 systemd[1]: Started sshd@3-64.227.96.36:22-139.178.89.65:55858.service - OpenSSH per-connection server daemon (139.178.89.65:55858). Nov 4 23:54:40.898933 sshd[1774]: Accepted publickey for core from 139.178.89.65 port 55858 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:40.899960 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:40.905240 systemd-logind[1574]: New session 4 of user core. Nov 4 23:54:40.920607 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 23:54:40.985458 sshd[1777]: Connection closed by 139.178.89.65 port 55858 Nov 4 23:54:40.986218 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Nov 4 23:54:40.998598 systemd[1]: sshd@3-64.227.96.36:22-139.178.89.65:55858.service: Deactivated successfully. Nov 4 23:54:41.000586 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 23:54:41.001775 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. Nov 4 23:54:41.005321 systemd[1]: Started sshd@4-64.227.96.36:22-139.178.89.65:55868.service - OpenSSH per-connection server daemon (139.178.89.65:55868). Nov 4 23:54:41.006399 systemd-logind[1574]: Removed session 4. 
Nov 4 23:54:41.069416 sshd[1783]: Accepted publickey for core from 139.178.89.65 port 55868 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:41.071145 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:41.078066 systemd-logind[1574]: New session 5 of user core. Nov 4 23:54:41.090657 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 23:54:41.164436 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 23:54:41.164851 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:54:41.181645 sudo[1787]: pam_unix(sudo:session): session closed for user root Nov 4 23:54:41.187317 sshd[1786]: Connection closed by 139.178.89.65 port 55868 Nov 4 23:54:41.186035 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Nov 4 23:54:41.199126 systemd[1]: sshd@4-64.227.96.36:22-139.178.89.65:55868.service: Deactivated successfully. Nov 4 23:54:41.201376 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 23:54:41.203356 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. Nov 4 23:54:41.206779 systemd[1]: Started sshd@5-64.227.96.36:22-139.178.89.65:55878.service - OpenSSH per-connection server daemon (139.178.89.65:55878). Nov 4 23:54:41.208681 systemd-logind[1574]: Removed session 5. Nov 4 23:54:41.282919 sshd[1793]: Accepted publickey for core from 139.178.89.65 port 55878 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:41.284575 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:41.289968 systemd-logind[1574]: New session 6 of user core. Nov 4 23:54:41.297563 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 4 23:54:41.361605 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 23:54:41.361932 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:54:41.368389 sudo[1798]: pam_unix(sudo:session): session closed for user root Nov 4 23:54:41.377884 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 23:54:41.378923 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:54:41.390515 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:54:41.450596 augenrules[1820]: No rules Nov 4 23:54:41.451681 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:54:41.451895 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:54:41.453803 sudo[1797]: pam_unix(sudo:session): session closed for user root Nov 4 23:54:41.458355 sshd[1796]: Connection closed by 139.178.89.65 port 55878 Nov 4 23:54:41.458224 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Nov 4 23:54:41.467866 systemd[1]: sshd@5-64.227.96.36:22-139.178.89.65:55878.service: Deactivated successfully. Nov 4 23:54:41.469748 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 23:54:41.470563 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Nov 4 23:54:41.473784 systemd[1]: Started sshd@6-64.227.96.36:22-139.178.89.65:55890.service - OpenSSH per-connection server daemon (139.178.89.65:55890). Nov 4 23:54:41.474621 systemd-logind[1574]: Removed session 6. 
Nov 4 23:54:41.535629 sshd[1829]: Accepted publickey for core from 139.178.89.65 port 55890 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:41.537132 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:41.543360 systemd-logind[1574]: New session 7 of user core. Nov 4 23:54:41.549654 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 4 23:54:41.610311 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 23:54:41.610658 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:54:42.173510 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 23:54:42.186878 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 23:54:42.546084 dockerd[1850]: time="2025-11-04T23:54:42.545953101Z" level=info msg="Starting up" Nov 4 23:54:42.551659 dockerd[1850]: time="2025-11-04T23:54:42.551610732Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 23:54:42.569680 dockerd[1850]: time="2025-11-04T23:54:42.569575547Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 23:54:42.586003 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport438626381-merged.mount: Deactivated successfully. Nov 4 23:54:42.640334 systemd[1]: var-lib-docker-metacopy\x2dcheck1174485422-merged.mount: Deactivated successfully. Nov 4 23:54:42.658675 dockerd[1850]: time="2025-11-04T23:54:42.658411043Z" level=info msg="Loading containers: start." Nov 4 23:54:42.672322 kernel: Initializing XFRM netlink socket Nov 4 23:54:42.913066 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. 
Nov 4 23:54:42.915537 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. Nov 4 23:54:42.929519 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. Nov 4 23:54:42.969780 systemd-networkd[1483]: docker0: Link UP Nov 4 23:54:42.970479 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. Nov 4 23:54:42.973868 dockerd[1850]: time="2025-11-04T23:54:42.973820133Z" level=info msg="Loading containers: done." Nov 4 23:54:42.993397 dockerd[1850]: time="2025-11-04T23:54:42.991955397Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 23:54:42.993397 dockerd[1850]: time="2025-11-04T23:54:42.992073286Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 23:54:42.993397 dockerd[1850]: time="2025-11-04T23:54:42.992242184Z" level=info msg="Initializing buildkit" Nov 4 23:54:43.020675 dockerd[1850]: time="2025-11-04T23:54:43.020592861Z" level=info msg="Completed buildkit initialization" Nov 4 23:54:43.033011 dockerd[1850]: time="2025-11-04T23:54:43.032925689Z" level=info msg="Daemon has completed initialization" Nov 4 23:54:43.033189 dockerd[1850]: time="2025-11-04T23:54:43.033118996Z" level=info msg="API listen on /run/docker.sock" Nov 4 23:54:43.034720 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 23:54:43.581680 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1278472848-merged.mount: Deactivated successfully. Nov 4 23:54:43.957631 containerd[1607]: time="2025-11-04T23:54:43.957179704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 4 23:54:44.501490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501934290.mount: Deactivated successfully. 
Nov 4 23:54:45.832766 containerd[1607]: time="2025-11-04T23:54:45.832689786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:45.833759 containerd[1607]: time="2025-11-04T23:54:45.833724271Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 4 23:54:45.834956 containerd[1607]: time="2025-11-04T23:54:45.834919353Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:45.838629 containerd[1607]: time="2025-11-04T23:54:45.838582006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:45.840022 containerd[1607]: time="2025-11-04T23:54:45.839971241Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.88273317s" Nov 4 23:54:45.840350 containerd[1607]: time="2025-11-04T23:54:45.840317563Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 4 23:54:45.841225 containerd[1607]: time="2025-11-04T23:54:45.841181946Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 4 23:54:47.504940 containerd[1607]: time="2025-11-04T23:54:47.503769197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:47.504940 containerd[1607]: time="2025-11-04T23:54:47.504648205Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 4 23:54:47.505616 containerd[1607]: time="2025-11-04T23:54:47.505411303Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:47.509896 containerd[1607]: time="2025-11-04T23:54:47.509832008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:47.511217 containerd[1607]: time="2025-11-04T23:54:47.511162165Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.66993397s" Nov 4 23:54:47.511217 containerd[1607]: time="2025-11-04T23:54:47.511209680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 4 23:54:47.512100 containerd[1607]: time="2025-11-04T23:54:47.512065648Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 4 23:54:48.724036 containerd[1607]: time="2025-11-04T23:54:48.723968188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:48.725302 containerd[1607]: time="2025-11-04T23:54:48.724720697Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 4 23:54:48.726348 containerd[1607]: time="2025-11-04T23:54:48.726190249Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:48.729402 containerd[1607]: time="2025-11-04T23:54:48.729360363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:48.730508 containerd[1607]: time="2025-11-04T23:54:48.730268785Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.217880212s" Nov 4 23:54:48.730508 containerd[1607]: time="2025-11-04T23:54:48.730376048Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 4 23:54:48.730987 containerd[1607]: time="2025-11-04T23:54:48.730959102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 4 23:54:49.798897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983931990.mount: Deactivated successfully. Nov 4 23:54:49.871823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 23:54:49.875390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:50.074531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 23:54:50.090160 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:54:50.168446 kubelet[2153]: E1104 23:54:50.168375 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:54:50.174652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:54:50.174859 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:54:50.176569 systemd[1]: kubelet.service: Consumed 222ms CPU time, 109.9M memory peak. Nov 4 23:54:50.575931 containerd[1607]: time="2025-11-04T23:54:50.575863345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:50.576824 containerd[1607]: time="2025-11-04T23:54:50.576784040Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 4 23:54:50.577565 containerd[1607]: time="2025-11-04T23:54:50.577395591Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:50.579726 containerd[1607]: time="2025-11-04T23:54:50.579684923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:50.580300 containerd[1607]: time="2025-11-04T23:54:50.580254007Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id 
\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.84917289s" Nov 4 23:54:50.580300 containerd[1607]: time="2025-11-04T23:54:50.580303214Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 4 23:54:50.581248 containerd[1607]: time="2025-11-04T23:54:50.580756140Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 4 23:54:50.581844 systemd-resolved[1285]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 4 23:54:51.250739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760868810.mount: Deactivated successfully. Nov 4 23:54:52.204492 containerd[1607]: time="2025-11-04T23:54:52.204425189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:52.206439 containerd[1607]: time="2025-11-04T23:54:52.206393093Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 4 23:54:52.208313 containerd[1607]: time="2025-11-04T23:54:52.207406112Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:52.210456 containerd[1607]: time="2025-11-04T23:54:52.210416988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:52.212021 containerd[1607]: time="2025-11-04T23:54:52.211968419Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.631183642s" Nov 4 23:54:52.212021 containerd[1607]: time="2025-11-04T23:54:52.212022364Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 4 23:54:52.212668 containerd[1607]: time="2025-11-04T23:54:52.212615915Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 4 23:54:52.628603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122974350.mount: Deactivated successfully. Nov 4 23:54:52.633749 containerd[1607]: time="2025-11-04T23:54:52.633685080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:52.634401 containerd[1607]: time="2025-11-04T23:54:52.634365766Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 4 23:54:52.635306 containerd[1607]: time="2025-11-04T23:54:52.634849773Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:52.636655 containerd[1607]: time="2025-11-04T23:54:52.636625453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:52.637429 containerd[1607]: 
time="2025-11-04T23:54:52.637400663Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 424.596756ms" Nov 4 23:54:52.637534 containerd[1607]: time="2025-11-04T23:54:52.637520695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 4 23:54:52.638214 containerd[1607]: time="2025-11-04T23:54:52.638171890Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 4 23:54:53.106139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456184275.mount: Deactivated successfully. Nov 4 23:54:53.650495 systemd-resolved[1285]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 4 23:54:56.743306 containerd[1607]: time="2025-11-04T23:54:56.741873975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:56.743306 containerd[1607]: time="2025-11-04T23:54:56.742928165Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 4 23:54:56.743878 containerd[1607]: time="2025-11-04T23:54:56.743839119Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:56.746907 containerd[1607]: time="2025-11-04T23:54:56.746865182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:56.748268 containerd[1607]: time="2025-11-04T23:54:56.748220926Z" 
level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.109995572s" Nov 4 23:54:56.748268 containerd[1607]: time="2025-11-04T23:54:56.748271070Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 4 23:54:59.830470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:59.831208 systemd[1]: kubelet.service: Consumed 222ms CPU time, 109.9M memory peak. Nov 4 23:54:59.835156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:59.875512 systemd[1]: Reload requested from client PID 2299 ('systemctl') (unit session-7.scope)... Nov 4 23:54:59.875702 systemd[1]: Reloading... Nov 4 23:55:00.020424 zram_generator::config[2343]: No configuration found. Nov 4 23:55:00.323709 systemd[1]: Reloading finished in 447 ms. Nov 4 23:55:00.382515 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 23:55:00.382606 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 23:55:00.383191 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:55:00.383249 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.3M memory peak. Nov 4 23:55:00.387068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:55:00.572999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 23:55:00.594846 (kubelet)[2398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:55:00.658221 kubelet[2398]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:55:00.658844 kubelet[2398]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:55:00.658903 kubelet[2398]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:55:00.663216 kubelet[2398]: I1104 23:55:00.661357 2398 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:55:02.030565 kubelet[2398]: I1104 23:55:02.030481 2398 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 23:55:02.032319 kubelet[2398]: I1104 23:55:02.031236 2398 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:55:02.032319 kubelet[2398]: I1104 23:55:02.031712 2398 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:55:02.092112 kubelet[2398]: I1104 23:55:02.091237 2398 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:55:02.093880 kubelet[2398]: E1104 23:55:02.093808 2398 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.227.96.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.227.96.36:6443: connect: 
connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:55:02.126830 kubelet[2398]: I1104 23:55:02.126793 2398 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:55:02.135884 kubelet[2398]: I1104 23:55:02.135617 2398 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 4 23:55:02.136507 kubelet[2398]: I1104 23:55:02.136451 2398 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:55:02.140966 kubelet[2398]: I1104 23:55:02.136662 2398 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-n-936e1cfeba","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerRec
oncilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:55:02.140966 kubelet[2398]: I1104 23:55:02.140796 2398 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:55:02.140966 kubelet[2398]: I1104 23:55:02.140820 2398 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 23:55:02.144379 kubelet[2398]: I1104 23:55:02.143365 2398 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:55:02.149432 kubelet[2398]: I1104 23:55:02.148903 2398 kubelet.go:480] "Attempting to sync node with API server" Nov 4 23:55:02.149432 kubelet[2398]: I1104 23:55:02.148971 2398 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:55:02.149432 kubelet[2398]: I1104 23:55:02.149020 2398 kubelet.go:386] "Adding apiserver pod source" Nov 4 23:55:02.149432 kubelet[2398]: I1104 23:55:02.149064 2398 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:55:02.165589 kubelet[2398]: E1104 23:55:02.165505 2398 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.227.96.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-936e1cfeba&limit=500&resourceVersion=0\": dial tcp 64.227.96.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:55:02.168491 kubelet[2398]: E1104 23:55:02.168444 2398 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.227.96.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.96.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:55:02.168902 kubelet[2398]: I1104 
23:55:02.168871 2398 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:55:02.169988 kubelet[2398]: I1104 23:55:02.169802 2398 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:55:02.171326 kubelet[2398]: W1104 23:55:02.170838 2398 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 23:55:02.180101 kubelet[2398]: I1104 23:55:02.180024 2398 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 23:55:02.180312 kubelet[2398]: I1104 23:55:02.180207 2398 server.go:1289] "Started kubelet" Nov 4 23:55:02.182117 kubelet[2398]: I1104 23:55:02.181339 2398 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:55:02.183820 kubelet[2398]: I1104 23:55:02.183782 2398 server.go:317] "Adding debug handlers to kubelet server" Nov 4 23:55:02.189019 kubelet[2398]: I1104 23:55:02.188789 2398 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:55:02.189750 kubelet[2398]: I1104 23:55:02.189552 2398 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:55:02.193041 kubelet[2398]: I1104 23:55:02.192660 2398 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:55:02.193618 kubelet[2398]: E1104 23:55:02.189813 2398 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.96.36:6443/api/v1/namespaces/default/events\": dial tcp 64.227.96.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-n-936e1cfeba.1874f2f83beffba6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-n-936e1cfeba,UID:ci-4487.0.0-n-936e1cfeba,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-n-936e1cfeba,},FirstTimestamp:2025-11-04 23:55:02.18010103 +0000 UTC m=+1.578903501,LastTimestamp:2025-11-04 23:55:02.18010103 +0000 UTC m=+1.578903501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-n-936e1cfeba,}" Nov 4 23:55:02.193927 kubelet[2398]: I1104 23:55:02.193899 2398 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:55:02.199636 kubelet[2398]: E1104 23:55:02.199557 2398 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" Nov 4 23:55:02.199844 kubelet[2398]: I1104 23:55:02.199670 2398 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 23:55:02.201320 kubelet[2398]: I1104 23:55:02.200058 2398 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 23:55:02.201320 kubelet[2398]: I1104 23:55:02.200189 2398 reconciler.go:26] "Reconciler: start to sync state" Nov 4 23:55:02.201320 kubelet[2398]: E1104 23:55:02.200897 2398 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.227.96.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.96.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:55:02.208312 kubelet[2398]: E1104 23:55:02.208131 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://64.227.96.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-936e1cfeba?timeout=10s\": dial tcp 64.227.96.36:6443: connect: connection refused" interval="200ms" Nov 4 23:55:02.209834 kubelet[2398]: I1104 23:55:02.209784 2398 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:55:02.210005 kubelet[2398]: I1104 23:55:02.209917 2398 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:55:02.216436 kubelet[2398]: E1104 23:55:02.215580 2398 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:55:02.217150 kubelet[2398]: I1104 23:55:02.217123 2398 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:55:02.217780 kubelet[2398]: I1104 23:55:02.217625 2398 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 23:55:02.258860 kubelet[2398]: I1104 23:55:02.258818 2398 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:55:02.259352 kubelet[2398]: I1104 23:55:02.259330 2398 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:55:02.259501 kubelet[2398]: I1104 23:55:02.259487 2398 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:55:02.288016 kubelet[2398]: I1104 23:55:02.287267 2398 policy_none.go:49] "None policy: Start" Nov 4 23:55:02.288016 kubelet[2398]: I1104 23:55:02.287347 2398 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 23:55:02.288016 kubelet[2398]: I1104 23:55:02.287394 2398 state_mem.go:35] "Initializing new in-memory state store" Nov 4 23:55:02.291563 kubelet[2398]: I1104 23:55:02.291400 2398 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 4 23:55:02.291563 kubelet[2398]: I1104 23:55:02.291512 2398 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 23:55:02.291939 kubelet[2398]: I1104 23:55:02.291815 2398 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 23:55:02.291939 kubelet[2398]: I1104 23:55:02.291835 2398 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 23:55:02.291939 kubelet[2398]: E1104 23:55:02.291909 2398 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:55:02.297792 kubelet[2398]: E1104 23:55:02.297687 2398 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.227.96.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.96.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:55:02.300328 kubelet[2398]: E1104 23:55:02.300257 2398 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" Nov 4 23:55:02.309726 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 23:55:02.330947 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 23:55:02.339969 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 4 23:55:02.361147 kubelet[2398]: E1104 23:55:02.361022 2398 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:55:02.361821 kubelet[2398]: I1104 23:55:02.361777 2398 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:55:02.361941 kubelet[2398]: I1104 23:55:02.361798 2398 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:55:02.362571 kubelet[2398]: I1104 23:55:02.362423 2398 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:55:02.366273 kubelet[2398]: E1104 23:55:02.365937 2398 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 23:55:02.366273 kubelet[2398]: E1104 23:55:02.366021 2398 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.0-n-936e1cfeba\" not found" Nov 4 23:55:02.410638 kubelet[2398]: E1104 23:55:02.410589 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.96.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-936e1cfeba?timeout=10s\": dial tcp 64.227.96.36:6443: connect: connection refused" interval="400ms" Nov 4 23:55:02.413853 systemd[1]: Created slice kubepods-burstable-pod1fafefed01ef09c448f8698aff010576.slice - libcontainer container kubepods-burstable-pod1fafefed01ef09c448f8698aff010576.slice. 
Nov 4 23:55:02.425753 kubelet[2398]: E1104 23:55:02.425270 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.430464 systemd[1]: Created slice kubepods-burstable-podcf54b1404203647f90de7ee7a3bd1d97.slice - libcontainer container kubepods-burstable-podcf54b1404203647f90de7ee7a3bd1d97.slice. Nov 4 23:55:02.435151 kubelet[2398]: E1104 23:55:02.434692 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.439566 systemd[1]: Created slice kubepods-burstable-poda9611c93a599ec862e2bb3df133f5ee4.slice - libcontainer container kubepods-burstable-poda9611c93a599ec862e2bb3df133f5ee4.slice. Nov 4 23:55:02.443324 kubelet[2398]: E1104 23:55:02.443029 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.464318 kubelet[2398]: I1104 23:55:02.463934 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.465219 kubelet[2398]: E1104 23:55:02.465167 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.96.36:6443/api/v1/nodes\": dial tcp 64.227.96.36:6443: connect: connection refused" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.500988 kubelet[2398]: I1104 23:55:02.500603 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9611c93a599ec862e2bb3df133f5ee4-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-936e1cfeba\" (UID: \"a9611c93a599ec862e2bb3df133f5ee4\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 
23:55:02.500988 kubelet[2398]: I1104 23:55:02.500672 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9611c93a599ec862e2bb3df133f5ee4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-n-936e1cfeba\" (UID: \"a9611c93a599ec862e2bb3df133f5ee4\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.500988 kubelet[2398]: I1104 23:55:02.500707 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fafefed01ef09c448f8698aff010576-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-936e1cfeba\" (UID: \"1fafefed01ef09c448f8698aff010576\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.500988 kubelet[2398]: I1104 23:55:02.500730 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fafefed01ef09c448f8698aff010576-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-936e1cfeba\" (UID: \"1fafefed01ef09c448f8698aff010576\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.500988 kubelet[2398]: I1104 23:55:02.500753 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf54b1404203647f90de7ee7a3bd1d97-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-936e1cfeba\" (UID: \"cf54b1404203647f90de7ee7a3bd1d97\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.501393 kubelet[2398]: I1104 23:55:02.500778 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9611c93a599ec862e2bb3df133f5ee4-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-936e1cfeba\" 
(UID: \"a9611c93a599ec862e2bb3df133f5ee4\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.501393 kubelet[2398]: I1104 23:55:02.500805 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9611c93a599ec862e2bb3df133f5ee4-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-936e1cfeba\" (UID: \"a9611c93a599ec862e2bb3df133f5ee4\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.501393 kubelet[2398]: I1104 23:55:02.500827 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9611c93a599ec862e2bb3df133f5ee4-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-936e1cfeba\" (UID: \"a9611c93a599ec862e2bb3df133f5ee4\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.501393 kubelet[2398]: I1104 23:55:02.500853 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fafefed01ef09c448f8698aff010576-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-936e1cfeba\" (UID: \"1fafefed01ef09c448f8698aff010576\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.666893 kubelet[2398]: I1104 23:55:02.666848 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.667375 kubelet[2398]: E1104 23:55:02.667306 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.96.36:6443/api/v1/nodes\": dial tcp 64.227.96.36:6443: connect: connection refused" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:02.727177 kubelet[2398]: E1104 23:55:02.726697 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:02.727835 containerd[1607]: time="2025-11-04T23:55:02.727788875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-936e1cfeba,Uid:1fafefed01ef09c448f8698aff010576,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:02.736297 kubelet[2398]: E1104 23:55:02.736121 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:02.744885 kubelet[2398]: E1104 23:55:02.744557 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:02.745495 containerd[1607]: time="2025-11-04T23:55:02.745163704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-936e1cfeba,Uid:cf54b1404203647f90de7ee7a3bd1d97,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:02.746703 containerd[1607]: time="2025-11-04T23:55:02.746645775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-936e1cfeba,Uid:a9611c93a599ec862e2bb3df133f5ee4,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:02.812328 kubelet[2398]: E1104 23:55:02.812099 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.96.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-936e1cfeba?timeout=10s\": dial tcp 64.227.96.36:6443: connect: connection refused" interval="800ms" Nov 4 23:55:02.905220 containerd[1607]: time="2025-11-04T23:55:02.905168414Z" level=info msg="connecting to shim 6718196c905cf35179a919a4354c03dfb19a5db3fd9b615ccebc692f18b5096f" address="unix:///run/containerd/s/c554bbd9df1077dc8fc4583785997c9d1f2a1818520acc56fef0feaedced2c76" namespace=k8s.io 
protocol=ttrpc version=3 Nov 4 23:55:02.907384 containerd[1607]: time="2025-11-04T23:55:02.907293211Z" level=info msg="connecting to shim 795fd0782de22df37d75edfa08ab76dde75433647728d64130538e3cd5348646" address="unix:///run/containerd/s/e55cc6a0a432abeaba99f024141a4e09f34fa294fada4a5802309e5b0df3289c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:02.910346 containerd[1607]: time="2025-11-04T23:55:02.909916175Z" level=info msg="connecting to shim bd50dbfdfb06a5a5228a9a93baa12bd805dd5a7540979779ac5decd0aee96d06" address="unix:///run/containerd/s/3b2d9a8cd890b7e1bcfaabd40862decb6fd4f88922934c28f9834746b93404f0" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:03.017707 systemd[1]: Started cri-containerd-bd50dbfdfb06a5a5228a9a93baa12bd805dd5a7540979779ac5decd0aee96d06.scope - libcontainer container bd50dbfdfb06a5a5228a9a93baa12bd805dd5a7540979779ac5decd0aee96d06. Nov 4 23:55:03.020318 kubelet[2398]: E1104 23:55:03.019116 2398 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.227.96.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.96.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:55:03.032507 systemd[1]: Started cri-containerd-6718196c905cf35179a919a4354c03dfb19a5db3fd9b615ccebc692f18b5096f.scope - libcontainer container 6718196c905cf35179a919a4354c03dfb19a5db3fd9b615ccebc692f18b5096f. Nov 4 23:55:03.035658 systemd[1]: Started cri-containerd-795fd0782de22df37d75edfa08ab76dde75433647728d64130538e3cd5348646.scope - libcontainer container 795fd0782de22df37d75edfa08ab76dde75433647728d64130538e3cd5348646. 
Nov 4 23:55:03.070755 kubelet[2398]: I1104 23:55:03.070677 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:03.071877 kubelet[2398]: E1104 23:55:03.071728 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.96.36:6443/api/v1/nodes\": dial tcp 64.227.96.36:6443: connect: connection refused" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:03.144837 containerd[1607]: time="2025-11-04T23:55:03.142893946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-936e1cfeba,Uid:1fafefed01ef09c448f8698aff010576,Namespace:kube-system,Attempt:0,} returns sandbox id \"795fd0782de22df37d75edfa08ab76dde75433647728d64130538e3cd5348646\"" Nov 4 23:55:03.150309 kubelet[2398]: E1104 23:55:03.149807 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:03.150470 containerd[1607]: time="2025-11-04T23:55:03.150190565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-936e1cfeba,Uid:a9611c93a599ec862e2bb3df133f5ee4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6718196c905cf35179a919a4354c03dfb19a5db3fd9b615ccebc692f18b5096f\"" Nov 4 23:55:03.150539 containerd[1607]: time="2025-11-04T23:55:03.150504556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-936e1cfeba,Uid:cf54b1404203647f90de7ee7a3bd1d97,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd50dbfdfb06a5a5228a9a93baa12bd805dd5a7540979779ac5decd0aee96d06\"" Nov 4 23:55:03.152096 kubelet[2398]: E1104 23:55:03.151568 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:03.153802 kubelet[2398]: E1104 
23:55:03.153710 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:03.156724 containerd[1607]: time="2025-11-04T23:55:03.156665785Z" level=info msg="CreateContainer within sandbox \"795fd0782de22df37d75edfa08ab76dde75433647728d64130538e3cd5348646\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 23:55:03.161986 containerd[1607]: time="2025-11-04T23:55:03.161921970Z" level=info msg="CreateContainer within sandbox \"bd50dbfdfb06a5a5228a9a93baa12bd805dd5a7540979779ac5decd0aee96d06\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 23:55:03.162516 containerd[1607]: time="2025-11-04T23:55:03.162463450Z" level=info msg="CreateContainer within sandbox \"6718196c905cf35179a919a4354c03dfb19a5db3fd9b615ccebc692f18b5096f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 23:55:03.170895 containerd[1607]: time="2025-11-04T23:55:03.170771329Z" level=info msg="Container a9038de687bf6d6141644fa87cc9e66ff2d3e75afa236b3892ecae3b6553c28c: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:03.178929 containerd[1607]: time="2025-11-04T23:55:03.178621793Z" level=info msg="Container 3c7543fc7794c4985a40b1b0686049ff2afce3c733b54673883bd78071f36504: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:03.182002 containerd[1607]: time="2025-11-04T23:55:03.181960991Z" level=info msg="Container 939fcf36008a809def0f3a4873b808f63c6cfc67686f01da32a02e88630c1743: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:03.206482 containerd[1607]: time="2025-11-04T23:55:03.206422951Z" level=info msg="CreateContainer within sandbox \"6718196c905cf35179a919a4354c03dfb19a5db3fd9b615ccebc692f18b5096f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"939fcf36008a809def0f3a4873b808f63c6cfc67686f01da32a02e88630c1743\"" Nov 4 
23:55:03.207514 containerd[1607]: time="2025-11-04T23:55:03.206828714Z" level=info msg="CreateContainer within sandbox \"bd50dbfdfb06a5a5228a9a93baa12bd805dd5a7540979779ac5decd0aee96d06\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3c7543fc7794c4985a40b1b0686049ff2afce3c733b54673883bd78071f36504\"" Nov 4 23:55:03.208608 containerd[1607]: time="2025-11-04T23:55:03.207838952Z" level=info msg="StartContainer for \"3c7543fc7794c4985a40b1b0686049ff2afce3c733b54673883bd78071f36504\"" Nov 4 23:55:03.208707 containerd[1607]: time="2025-11-04T23:55:03.208653122Z" level=info msg="CreateContainer within sandbox \"795fd0782de22df37d75edfa08ab76dde75433647728d64130538e3cd5348646\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a9038de687bf6d6141644fa87cc9e66ff2d3e75afa236b3892ecae3b6553c28c\"" Nov 4 23:55:03.208897 containerd[1607]: time="2025-11-04T23:55:03.208855620Z" level=info msg="StartContainer for \"939fcf36008a809def0f3a4873b808f63c6cfc67686f01da32a02e88630c1743\"" Nov 4 23:55:03.214287 containerd[1607]: time="2025-11-04T23:55:03.214189219Z" level=info msg="connecting to shim 3c7543fc7794c4985a40b1b0686049ff2afce3c733b54673883bd78071f36504" address="unix:///run/containerd/s/3b2d9a8cd890b7e1bcfaabd40862decb6fd4f88922934c28f9834746b93404f0" protocol=ttrpc version=3 Nov 4 23:55:03.214541 containerd[1607]: time="2025-11-04T23:55:03.214512572Z" level=info msg="connecting to shim 939fcf36008a809def0f3a4873b808f63c6cfc67686f01da32a02e88630c1743" address="unix:///run/containerd/s/c554bbd9df1077dc8fc4583785997c9d1f2a1818520acc56fef0feaedced2c76" protocol=ttrpc version=3 Nov 4 23:55:03.217469 containerd[1607]: time="2025-11-04T23:55:03.217317095Z" level=info msg="StartContainer for \"a9038de687bf6d6141644fa87cc9e66ff2d3e75afa236b3892ecae3b6553c28c\"" Nov 4 23:55:03.220688 containerd[1607]: time="2025-11-04T23:55:03.220574950Z" level=info msg="connecting to shim 
a9038de687bf6d6141644fa87cc9e66ff2d3e75afa236b3892ecae3b6553c28c" address="unix:///run/containerd/s/e55cc6a0a432abeaba99f024141a4e09f34fa294fada4a5802309e5b0df3289c" protocol=ttrpc version=3 Nov 4 23:55:03.244563 systemd[1]: Started cri-containerd-3c7543fc7794c4985a40b1b0686049ff2afce3c733b54673883bd78071f36504.scope - libcontainer container 3c7543fc7794c4985a40b1b0686049ff2afce3c733b54673883bd78071f36504. Nov 4 23:55:03.278659 systemd[1]: Started cri-containerd-939fcf36008a809def0f3a4873b808f63c6cfc67686f01da32a02e88630c1743.scope - libcontainer container 939fcf36008a809def0f3a4873b808f63c6cfc67686f01da32a02e88630c1743. Nov 4 23:55:03.289614 systemd[1]: Started cri-containerd-a9038de687bf6d6141644fa87cc9e66ff2d3e75afa236b3892ecae3b6553c28c.scope - libcontainer container a9038de687bf6d6141644fa87cc9e66ff2d3e75afa236b3892ecae3b6553c28c. Nov 4 23:55:03.296900 kubelet[2398]: E1104 23:55:03.296776 2398 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.227.96.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-936e1cfeba&limit=500&resourceVersion=0\": dial tcp 64.227.96.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:55:03.406416 containerd[1607]: time="2025-11-04T23:55:03.406189526Z" level=info msg="StartContainer for \"939fcf36008a809def0f3a4873b808f63c6cfc67686f01da32a02e88630c1743\" returns successfully" Nov 4 23:55:03.438807 containerd[1607]: time="2025-11-04T23:55:03.438674367Z" level=info msg="StartContainer for \"a9038de687bf6d6141644fa87cc9e66ff2d3e75afa236b3892ecae3b6553c28c\" returns successfully" Nov 4 23:55:03.461638 containerd[1607]: time="2025-11-04T23:55:03.461551239Z" level=info msg="StartContainer for \"3c7543fc7794c4985a40b1b0686049ff2afce3c733b54673883bd78071f36504\" returns successfully" Nov 4 23:55:03.500873 kubelet[2398]: E1104 23:55:03.500817 2398 reflector.go:200] "Failed to watch" err="failed to list 
*v1.RuntimeClass: Get \"https://64.227.96.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.96.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:55:03.614159 kubelet[2398]: E1104 23:55:03.613990 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.96.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-936e1cfeba?timeout=10s\": dial tcp 64.227.96.36:6443: connect: connection refused" interval="1.6s" Nov 4 23:55:03.714312 kubelet[2398]: E1104 23:55:03.713467 2398 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.227.96.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.96.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:55:03.873973 kubelet[2398]: I1104 23:55:03.873921 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:04.335830 kubelet[2398]: E1104 23:55:04.335698 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:04.336452 kubelet[2398]: E1104 23:55:04.335855 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:04.341064 kubelet[2398]: E1104 23:55:04.341022 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:04.341261 kubelet[2398]: E1104 23:55:04.341215 2398 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:04.346898 kubelet[2398]: E1104 23:55:04.346853 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:04.347090 kubelet[2398]: E1104 23:55:04.347059 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:05.347667 kubelet[2398]: E1104 23:55:05.347623 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:05.348913 kubelet[2398]: E1104 23:55:05.347755 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:05.348913 kubelet[2398]: E1104 23:55:05.348002 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:05.348913 kubelet[2398]: E1104 23:55:05.348083 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:05.349689 kubelet[2398]: E1104 23:55:05.349660 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:05.349856 kubelet[2398]: E1104 23:55:05.349835 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:06.351022 kubelet[2398]: E1104 23:55:06.350978 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:06.351477 kubelet[2398]: E1104 23:55:06.351178 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:06.352708 kubelet[2398]: E1104 23:55:06.352677 2398 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-936e1cfeba\" not found" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:06.352845 kubelet[2398]: E1104 23:55:06.352827 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:06.610939 kubelet[2398]: I1104 23:55:06.610789 2398 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:06.610939 kubelet[2398]: E1104 23:55:06.610846 2398 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4487.0.0-n-936e1cfeba\": node \"ci-4487.0.0-n-936e1cfeba\" not found" Nov 4 23:55:06.702413 kubelet[2398]: I1104 23:55:06.702367 2398 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:06.710465 kubelet[2398]: E1104 23:55:06.710415 2398 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.0-n-936e1cfeba\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:06.710465 kubelet[2398]: 
I1104 23:55:06.710462 2398 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:06.712972 kubelet[2398]: E1104 23:55:06.712927 2398 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-n-936e1cfeba\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:06.712972 kubelet[2398]: I1104 23:55:06.712958 2398 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:06.714827 kubelet[2398]: E1104 23:55:06.714790 2398 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-n-936e1cfeba\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:07.171916 kubelet[2398]: I1104 23:55:07.171592 2398 apiserver.go:52] "Watching apiserver" Nov 4 23:55:07.200564 kubelet[2398]: I1104 23:55:07.200490 2398 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:55:08.274486 kubelet[2398]: I1104 23:55:08.274441 2398 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:08.281681 kubelet[2398]: I1104 23:55:08.281626 2398 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:55:08.282804 kubelet[2398]: E1104 23:55:08.282755 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:08.354298 kubelet[2398]: E1104 23:55:08.354242 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:09.073259 systemd[1]: Reload requested from client PID 2679 ('systemctl') (unit session-7.scope)... Nov 4 23:55:09.073743 systemd[1]: Reloading... Nov 4 23:55:09.197316 zram_generator::config[2723]: No configuration found. Nov 4 23:55:09.532339 systemd[1]: Reloading finished in 457 ms. Nov 4 23:55:09.571863 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:55:09.595046 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 23:55:09.595839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:55:09.596081 systemd[1]: kubelet.service: Consumed 1.986s CPU time, 127.1M memory peak. Nov 4 23:55:09.601574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:55:09.824596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:55:09.840380 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:55:09.950772 kubelet[2775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:55:09.951230 kubelet[2775]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:55:09.951230 kubelet[2775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 23:55:09.951230 kubelet[2775]: I1104 23:55:09.951004 2775 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:55:09.964833 kubelet[2775]: I1104 23:55:09.964773 2775 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 23:55:09.965345 kubelet[2775]: I1104 23:55:09.965045 2775 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:55:09.965645 kubelet[2775]: I1104 23:55:09.965624 2775 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:55:09.967831 kubelet[2775]: I1104 23:55:09.967791 2775 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 23:55:09.979869 kubelet[2775]: I1104 23:55:09.979820 2775 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:55:09.997077 kubelet[2775]: I1104 23:55:09.996993 2775 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:55:10.001682 kubelet[2775]: I1104 23:55:10.001632 2775 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 23:55:10.002155 kubelet[2775]: I1104 23:55:10.002123 2775 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:55:10.002459 kubelet[2775]: I1104 23:55:10.002237 2775 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-n-936e1cfeba","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:55:10.002642 kubelet[2775]: I1104 23:55:10.002631 2775 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 
23:55:10.002694 kubelet[2775]: I1104 23:55:10.002688 2775 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 23:55:10.002784 kubelet[2775]: I1104 23:55:10.002777 2775 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:55:10.003007 kubelet[2775]: I1104 23:55:10.002994 2775 kubelet.go:480] "Attempting to sync node with API server" Nov 4 23:55:10.003083 kubelet[2775]: I1104 23:55:10.003074 2775 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:55:10.003153 kubelet[2775]: I1104 23:55:10.003144 2775 kubelet.go:386] "Adding apiserver pod source" Nov 4 23:55:10.003207 kubelet[2775]: I1104 23:55:10.003200 2775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:55:10.005822 kubelet[2775]: I1104 23:55:10.005783 2775 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:55:10.008316 kubelet[2775]: I1104 23:55:10.007401 2775 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:55:10.016054 kubelet[2775]: I1104 23:55:10.016015 2775 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 23:55:10.016207 kubelet[2775]: I1104 23:55:10.016095 2775 server.go:1289] "Started kubelet" Nov 4 23:55:10.021303 kubelet[2775]: I1104 23:55:10.021185 2775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:55:10.039260 kubelet[2775]: I1104 23:55:10.036798 2775 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:55:10.039260 kubelet[2775]: I1104 23:55:10.038955 2775 server.go:317] "Adding debug handlers to kubelet server" Nov 4 23:55:10.049804 kubelet[2775]: I1104 23:55:10.048078 2775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:55:10.049804 kubelet[2775]: I1104 23:55:10.048348 2775 server.go:255] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:55:10.049804 kubelet[2775]: I1104 23:55:10.048639 2775 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:55:10.055933 kubelet[2775]: I1104 23:55:10.054208 2775 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 23:55:10.056897 kubelet[2775]: I1104 23:55:10.056817 2775 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 23:55:10.057032 kubelet[2775]: I1104 23:55:10.056955 2775 reconciler.go:26] "Reconciler: start to sync state" Nov 4 23:55:10.065070 kubelet[2775]: I1104 23:55:10.064549 2775 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:55:10.065070 kubelet[2775]: I1104 23:55:10.064680 2775 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:55:10.071567 kubelet[2775]: I1104 23:55:10.071525 2775 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:55:10.080437 kubelet[2775]: I1104 23:55:10.079934 2775 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 23:55:10.086468 kubelet[2775]: I1104 23:55:10.086423 2775 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 23:55:10.087189 kubelet[2775]: I1104 23:55:10.086699 2775 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 23:55:10.087189 kubelet[2775]: I1104 23:55:10.086744 2775 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 23:55:10.087189 kubelet[2775]: I1104 23:55:10.086756 2775 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 23:55:10.087189 kubelet[2775]: E1104 23:55:10.086837 2775 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:55:10.107953 kubelet[2775]: E1104 23:55:10.107920 2775 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:55:10.166917 kubelet[2775]: I1104 23:55:10.166819 2775 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:55:10.166917 kubelet[2775]: I1104 23:55:10.166842 2775 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:55:10.166917 kubelet[2775]: I1104 23:55:10.166919 2775 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:55:10.167197 kubelet[2775]: I1104 23:55:10.167158 2775 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 23:55:10.167238 kubelet[2775]: I1104 23:55:10.167180 2775 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 23:55:10.167238 kubelet[2775]: I1104 23:55:10.167215 2775 policy_none.go:49] "None policy: Start" Nov 4 23:55:10.167238 kubelet[2775]: I1104 23:55:10.167229 2775 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 23:55:10.167323 kubelet[2775]: I1104 23:55:10.167245 2775 state_mem.go:35] "Initializing new in-memory state store" Nov 4 23:55:10.167416 kubelet[2775]: I1104 23:55:10.167395 2775 state_mem.go:75] "Updated machine memory state" Nov 4 23:55:10.179804 kubelet[2775]: E1104 23:55:10.179437 2775 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:55:10.181994 kubelet[2775]: I1104 23:55:10.181589 2775 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:55:10.181994 kubelet[2775]: I1104 
23:55:10.181613 2775 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:55:10.182150 kubelet[2775]: I1104 23:55:10.182013 2775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:55:10.194052 kubelet[2775]: E1104 23:55:10.193228 2775 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 23:55:10.196325 kubelet[2775]: I1104 23:55:10.195999 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.203009 kubelet[2775]: I1104 23:55:10.201653 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.203009 kubelet[2775]: I1104 23:55:10.201787 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.209757 kubelet[2775]: I1104 23:55:10.209433 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:55:10.209757 kubelet[2775]: E1104 23:55:10.209513 2775 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-n-936e1cfeba\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.210587 kubelet[2775]: I1104 23:55:10.210507 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:55:10.220141 kubelet[2775]: I1104 23:55:10.220095 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:55:10.293655 kubelet[2775]: 
I1104 23:55:10.293626 2775 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.305458 kubelet[2775]: I1104 23:55:10.305339 2775 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.305955 kubelet[2775]: I1104 23:55:10.305892 2775 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.358079 kubelet[2775]: I1104 23:55:10.357768 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fafefed01ef09c448f8698aff010576-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-936e1cfeba\" (UID: \"1fafefed01ef09c448f8698aff010576\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.358557 kubelet[2775]: I1104 23:55:10.358370 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fafefed01ef09c448f8698aff010576-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-936e1cfeba\" (UID: \"1fafefed01ef09c448f8698aff010576\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.358557 kubelet[2775]: I1104 23:55:10.358443 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fafefed01ef09c448f8698aff010576-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-936e1cfeba\" (UID: \"1fafefed01ef09c448f8698aff010576\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.358557 kubelet[2775]: I1104 23:55:10.358508 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9611c93a599ec862e2bb3df133f5ee4-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-936e1cfeba\" 
(UID: \"a9611c93a599ec862e2bb3df133f5ee4\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.358914 kubelet[2775]: I1104 23:55:10.358799 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9611c93a599ec862e2bb3df133f5ee4-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-936e1cfeba\" (UID: \"a9611c93a599ec862e2bb3df133f5ee4\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.358914 kubelet[2775]: I1104 23:55:10.358890 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf54b1404203647f90de7ee7a3bd1d97-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-936e1cfeba\" (UID: \"cf54b1404203647f90de7ee7a3bd1d97\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.359180 kubelet[2775]: I1104 23:55:10.359114 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9611c93a599ec862e2bb3df133f5ee4-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-936e1cfeba\" (UID: \"a9611c93a599ec862e2bb3df133f5ee4\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.359325 kubelet[2775]: I1104 23:55:10.359254 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9611c93a599ec862e2bb3df133f5ee4-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-936e1cfeba\" (UID: \"a9611c93a599ec862e2bb3df133f5ee4\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.359515 kubelet[2775]: I1104 23:55:10.359460 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9611c93a599ec862e2bb3df133f5ee4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-n-936e1cfeba\" (UID: \"a9611c93a599ec862e2bb3df133f5ee4\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:10.511052 kubelet[2775]: E1104 23:55:10.510610 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:10.515929 kubelet[2775]: E1104 23:55:10.515844 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:10.522437 kubelet[2775]: E1104 23:55:10.522401 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:11.010490 kubelet[2775]: I1104 23:55:11.009681 2775 apiserver.go:52] "Watching apiserver" Nov 4 23:55:11.057261 kubelet[2775]: I1104 23:55:11.057196 2775 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:55:11.111007 kubelet[2775]: I1104 23:55:11.110934 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" podStartSLOduration=3.110883935 podStartE2EDuration="3.110883935s" podCreationTimestamp="2025-11-04 23:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:11.110157748 +0000 UTC m=+1.255632846" watchObservedRunningTime="2025-11-04 23:55:11.110883935 +0000 UTC m=+1.256359016" Nov 4 23:55:11.133977 kubelet[2775]: I1104 23:55:11.133070 2775 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:11.133977 kubelet[2775]: E1104 23:55:11.133555 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:11.134990 kubelet[2775]: E1104 23:55:11.134697 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:11.150168 kubelet[2775]: I1104 23:55:11.150101 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-936e1cfeba" podStartSLOduration=1.150081192 podStartE2EDuration="1.150081192s" podCreationTimestamp="2025-11-04 23:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:11.124977534 +0000 UTC m=+1.270452628" watchObservedRunningTime="2025-11-04 23:55:11.150081192 +0000 UTC m=+1.295556296" Nov 4 23:55:11.152392 kubelet[2775]: I1104 23:55:11.152304 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.0-n-936e1cfeba" podStartSLOduration=1.151953132 podStartE2EDuration="1.151953132s" podCreationTimestamp="2025-11-04 23:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:11.151569345 +0000 UTC m=+1.297044445" watchObservedRunningTime="2025-11-04 23:55:11.151953132 +0000 UTC m=+1.297428235" Nov 4 23:55:11.161849 kubelet[2775]: I1104 23:55:11.161819 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 4 23:55:11.163703 kubelet[2775]: E1104 
23:55:11.163576 2775 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-n-936e1cfeba\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:11.164460 kubelet[2775]: E1104 23:55:11.164232 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:12.135187 kubelet[2775]: E1104 23:55:12.135039 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:12.135187 kubelet[2775]: E1104 23:55:12.135121 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:13.121773 systemd-resolved[1285]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Nov 4 23:55:13.987191 systemd-timesyncd[1462]: Contacted time server 166.88.142.52:123 (2.flatcar.pool.ntp.org). Nov 4 23:55:13.987245 systemd-resolved[1285]: Clock change detected. Flushing caches. Nov 4 23:55:13.987283 systemd-timesyncd[1462]: Initial clock synchronization to Tue 2025-11-04 23:55:13.986641 UTC. Nov 4 23:55:14.810090 kubelet[2775]: I1104 23:55:14.810036 2775 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 23:55:14.811130 containerd[1607]: time="2025-11-04T23:55:14.811080095Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 4 23:55:14.811801 kubelet[2775]: I1104 23:55:14.811610 2775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 23:55:14.925554 kubelet[2775]: E1104 23:55:14.925485 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:15.830754 kubelet[2775]: E1104 23:55:15.830720 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:15.909114 systemd[1]: Created slice kubepods-besteffort-pod8b579856_fda9_44bd_be21_80559cdd7cd3.slice - libcontainer container kubepods-besteffort-pod8b579856_fda9_44bd_be21_80559cdd7cd3.slice. Nov 4 23:55:15.980813 kubelet[2775]: I1104 23:55:15.980748 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b579856-fda9-44bd-be21-80559cdd7cd3-xtables-lock\") pod \"kube-proxy-c59qd\" (UID: \"8b579856-fda9-44bd-be21-80559cdd7cd3\") " pod="kube-system/kube-proxy-c59qd" Nov 4 23:55:15.980813 kubelet[2775]: I1104 23:55:15.980803 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7v9c\" (UniqueName: \"kubernetes.io/projected/8b579856-fda9-44bd-be21-80559cdd7cd3-kube-api-access-h7v9c\") pod \"kube-proxy-c59qd\" (UID: \"8b579856-fda9-44bd-be21-80559cdd7cd3\") " pod="kube-system/kube-proxy-c59qd" Nov 4 23:55:15.981076 kubelet[2775]: I1104 23:55:15.980850 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8b579856-fda9-44bd-be21-80559cdd7cd3-kube-proxy\") pod \"kube-proxy-c59qd\" (UID: \"8b579856-fda9-44bd-be21-80559cdd7cd3\") " 
pod="kube-system/kube-proxy-c59qd" Nov 4 23:55:15.981076 kubelet[2775]: I1104 23:55:15.980868 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b579856-fda9-44bd-be21-80559cdd7cd3-lib-modules\") pod \"kube-proxy-c59qd\" (UID: \"8b579856-fda9-44bd-be21-80559cdd7cd3\") " pod="kube-system/kube-proxy-c59qd" Nov 4 23:55:16.091289 systemd[1]: Created slice kubepods-besteffort-pod767212d7_61c6_4dc3_aace_475930f95a88.slice - libcontainer container kubepods-besteffort-pod767212d7_61c6_4dc3_aace_475930f95a88.slice. Nov 4 23:55:16.182466 kubelet[2775]: I1104 23:55:16.182407 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/767212d7-61c6-4dc3-aace-475930f95a88-var-lib-calico\") pod \"tigera-operator-7dcd859c48-p5lp4\" (UID: \"767212d7-61c6-4dc3-aace-475930f95a88\") " pod="tigera-operator/tigera-operator-7dcd859c48-p5lp4" Nov 4 23:55:16.182820 kubelet[2775]: I1104 23:55:16.182794 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzsts\" (UniqueName: \"kubernetes.io/projected/767212d7-61c6-4dc3-aace-475930f95a88-kube-api-access-dzsts\") pod \"tigera-operator-7dcd859c48-p5lp4\" (UID: \"767212d7-61c6-4dc3-aace-475930f95a88\") " pod="tigera-operator/tigera-operator-7dcd859c48-p5lp4" Nov 4 23:55:16.218680 kubelet[2775]: E1104 23:55:16.218304 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:16.219385 containerd[1607]: time="2025-11-04T23:55:16.219342117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c59qd,Uid:8b579856-fda9-44bd-be21-80559cdd7cd3,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:16.245437 containerd[1607]: 
time="2025-11-04T23:55:16.245275091Z" level=info msg="connecting to shim 688bffef87fe743350e7a1f5b7d87e81e561f47ba39ebbe0e6d7254be22b8c9e" address="unix:///run/containerd/s/0bba34442bda7a5783621cc32c3c7b6234017272bcd375f642f5a704fcb9e62f" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:16.284073 systemd[1]: Started cri-containerd-688bffef87fe743350e7a1f5b7d87e81e561f47ba39ebbe0e6d7254be22b8c9e.scope - libcontainer container 688bffef87fe743350e7a1f5b7d87e81e561f47ba39ebbe0e6d7254be22b8c9e. Nov 4 23:55:16.331119 containerd[1607]: time="2025-11-04T23:55:16.331055735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c59qd,Uid:8b579856-fda9-44bd-be21-80559cdd7cd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"688bffef87fe743350e7a1f5b7d87e81e561f47ba39ebbe0e6d7254be22b8c9e\"" Nov 4 23:55:16.332478 kubelet[2775]: E1104 23:55:16.332446 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:16.340078 containerd[1607]: time="2025-11-04T23:55:16.340002667Z" level=info msg="CreateContainer within sandbox \"688bffef87fe743350e7a1f5b7d87e81e561f47ba39ebbe0e6d7254be22b8c9e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 23:55:16.352937 containerd[1607]: time="2025-11-04T23:55:16.352752231Z" level=info msg="Container 83690263633bb02e208cc3e1e2ef215b990611bd799fbe7dadf448b88b895b87: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:16.362938 containerd[1607]: time="2025-11-04T23:55:16.362895936Z" level=info msg="CreateContainer within sandbox \"688bffef87fe743350e7a1f5b7d87e81e561f47ba39ebbe0e6d7254be22b8c9e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"83690263633bb02e208cc3e1e2ef215b990611bd799fbe7dadf448b88b895b87\"" Nov 4 23:55:16.365787 containerd[1607]: time="2025-11-04T23:55:16.365744137Z" level=info msg="StartContainer for 
\"83690263633bb02e208cc3e1e2ef215b990611bd799fbe7dadf448b88b895b87\"" Nov 4 23:55:16.368386 containerd[1607]: time="2025-11-04T23:55:16.368338106Z" level=info msg="connecting to shim 83690263633bb02e208cc3e1e2ef215b990611bd799fbe7dadf448b88b895b87" address="unix:///run/containerd/s/0bba34442bda7a5783621cc32c3c7b6234017272bcd375f642f5a704fcb9e62f" protocol=ttrpc version=3 Nov 4 23:55:16.398519 containerd[1607]: time="2025-11-04T23:55:16.398450175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p5lp4,Uid:767212d7-61c6-4dc3-aace-475930f95a88,Namespace:tigera-operator,Attempt:0,}" Nov 4 23:55:16.399279 systemd[1]: Started cri-containerd-83690263633bb02e208cc3e1e2ef215b990611bd799fbe7dadf448b88b895b87.scope - libcontainer container 83690263633bb02e208cc3e1e2ef215b990611bd799fbe7dadf448b88b895b87. Nov 4 23:55:16.424618 containerd[1607]: time="2025-11-04T23:55:16.424571671Z" level=info msg="connecting to shim 5ffd212a9157f4c3a8291c7c06d6c8d1367acda88ceffc8f9d236d698ed3d1ae" address="unix:///run/containerd/s/96fd8d6ef54d401e63ec21dbd77822cb162ba36d1c60e15c074452ca9b0a7208" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:16.465072 systemd[1]: Started cri-containerd-5ffd212a9157f4c3a8291c7c06d6c8d1367acda88ceffc8f9d236d698ed3d1ae.scope - libcontainer container 5ffd212a9157f4c3a8291c7c06d6c8d1367acda88ceffc8f9d236d698ed3d1ae. 
Nov 4 23:55:16.484593 containerd[1607]: time="2025-11-04T23:55:16.484539405Z" level=info msg="StartContainer for \"83690263633bb02e208cc3e1e2ef215b990611bd799fbe7dadf448b88b895b87\" returns successfully" Nov 4 23:55:16.553402 containerd[1607]: time="2025-11-04T23:55:16.553345694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p5lp4,Uid:767212d7-61c6-4dc3-aace-475930f95a88,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5ffd212a9157f4c3a8291c7c06d6c8d1367acda88ceffc8f9d236d698ed3d1ae\"" Nov 4 23:55:16.557706 containerd[1607]: time="2025-11-04T23:55:16.557657591Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 4 23:55:16.797683 kubelet[2775]: E1104 23:55:16.797480 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:16.837548 kubelet[2775]: E1104 23:55:16.837500 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:16.839330 kubelet[2775]: E1104 23:55:16.839278 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:16.878861 kubelet[2775]: I1104 23:55:16.878569 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c59qd" podStartSLOduration=1.8785528550000001 podStartE2EDuration="1.878552855s" podCreationTimestamp="2025-11-04 23:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:16.863886456 +0000 UTC m=+6.320809894" watchObservedRunningTime="2025-11-04 23:55:16.878552855 +0000 UTC m=+6.335476294" Nov 4 
23:55:17.086297 kubelet[2775]: E1104 23:55:17.086234 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:17.109082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627270867.mount: Deactivated successfully. Nov 4 23:55:17.802633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025825263.mount: Deactivated successfully. Nov 4 23:55:17.844607 kubelet[2775]: E1104 23:55:17.843214 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:19.692337 containerd[1607]: time="2025-11-04T23:55:19.692264003Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:19.693637 containerd[1607]: time="2025-11-04T23:55:19.693445937Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 4 23:55:19.693964 containerd[1607]: time="2025-11-04T23:55:19.693928815Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:19.696685 containerd[1607]: time="2025-11-04T23:55:19.696635993Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:19.698054 containerd[1607]: time="2025-11-04T23:55:19.697899911Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest 
\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.140194548s" Nov 4 23:55:19.698054 containerd[1607]: time="2025-11-04T23:55:19.697945562Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 4 23:55:19.703489 containerd[1607]: time="2025-11-04T23:55:19.703387973Z" level=info msg="CreateContainer within sandbox \"5ffd212a9157f4c3a8291c7c06d6c8d1367acda88ceffc8f9d236d698ed3d1ae\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 4 23:55:19.714306 containerd[1607]: time="2025-11-04T23:55:19.713202790Z" level=info msg="Container d1d2d4e79b989fd9fddf64316129bbc74f1202b7ff9a523f72e99daf47aa2233: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:19.722705 containerd[1607]: time="2025-11-04T23:55:19.722571508Z" level=info msg="CreateContainer within sandbox \"5ffd212a9157f4c3a8291c7c06d6c8d1367acda88ceffc8f9d236d698ed3d1ae\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d1d2d4e79b989fd9fddf64316129bbc74f1202b7ff9a523f72e99daf47aa2233\"" Nov 4 23:55:19.723750 containerd[1607]: time="2025-11-04T23:55:19.723658141Z" level=info msg="StartContainer for \"d1d2d4e79b989fd9fddf64316129bbc74f1202b7ff9a523f72e99daf47aa2233\"" Nov 4 23:55:19.726287 containerd[1607]: time="2025-11-04T23:55:19.726247785Z" level=info msg="connecting to shim d1d2d4e79b989fd9fddf64316129bbc74f1202b7ff9a523f72e99daf47aa2233" address="unix:///run/containerd/s/96fd8d6ef54d401e63ec21dbd77822cb162ba36d1c60e15c074452ca9b0a7208" protocol=ttrpc version=3 Nov 4 23:55:19.757177 systemd[1]: Started cri-containerd-d1d2d4e79b989fd9fddf64316129bbc74f1202b7ff9a523f72e99daf47aa2233.scope - libcontainer container d1d2d4e79b989fd9fddf64316129bbc74f1202b7ff9a523f72e99daf47aa2233. 
Nov 4 23:55:19.799302 containerd[1607]: time="2025-11-04T23:55:19.799224139Z" level=info msg="StartContainer for \"d1d2d4e79b989fd9fddf64316129bbc74f1202b7ff9a523f72e99daf47aa2233\" returns successfully" Nov 4 23:55:22.852021 update_engine[1575]: I20251104 23:55:22.851900 1575 update_attempter.cc:509] Updating boot flags... Nov 4 23:55:26.916255 sudo[1833]: pam_unix(sudo:session): session closed for user root Nov 4 23:55:26.923179 sshd[1832]: Connection closed by 139.178.89.65 port 55890 Nov 4 23:55:26.922366 sshd-session[1829]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:26.932531 systemd[1]: sshd@6-64.227.96.36:22-139.178.89.65:55890.service: Deactivated successfully. Nov 4 23:55:26.938399 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 23:55:26.938799 systemd[1]: session-7.scope: Consumed 5.724s CPU time, 160.4M memory peak. Nov 4 23:55:26.944941 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. Nov 4 23:55:26.949221 systemd-logind[1574]: Removed session 7. Nov 4 23:55:34.129549 kubelet[2775]: I1104 23:55:34.129167 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-p5lp4" podStartSLOduration=14.98579402 podStartE2EDuration="18.129149157s" podCreationTimestamp="2025-11-04 23:55:16 +0000 UTC" firstStartedPulling="2025-11-04 23:55:16.555927609 +0000 UTC m=+6.012851040" lastFinishedPulling="2025-11-04 23:55:19.69928274 +0000 UTC m=+9.156206177" observedRunningTime="2025-11-04 23:55:19.86899675 +0000 UTC m=+9.325920189" watchObservedRunningTime="2025-11-04 23:55:34.129149157 +0000 UTC m=+23.586072594" Nov 4 23:55:34.145770 systemd[1]: Created slice kubepods-besteffort-pod7df1fa2a_8fa4_4f60_a652_7af216af02a1.slice - libcontainer container kubepods-besteffort-pod7df1fa2a_8fa4_4f60_a652_7af216af02a1.slice. 
Nov 4 23:55:34.207526 kubelet[2775]: I1104 23:55:34.207397 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df1fa2a-8fa4-4f60-a652-7af216af02a1-tigera-ca-bundle\") pod \"calico-typha-64dbc6fccb-8krqg\" (UID: \"7df1fa2a-8fa4-4f60-a652-7af216af02a1\") " pod="calico-system/calico-typha-64dbc6fccb-8krqg" Nov 4 23:55:34.207848 kubelet[2775]: I1104 23:55:34.207756 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7df1fa2a-8fa4-4f60-a652-7af216af02a1-typha-certs\") pod \"calico-typha-64dbc6fccb-8krqg\" (UID: \"7df1fa2a-8fa4-4f60-a652-7af216af02a1\") " pod="calico-system/calico-typha-64dbc6fccb-8krqg" Nov 4 23:55:34.207848 kubelet[2775]: I1104 23:55:34.207814 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sswdh\" (UniqueName: \"kubernetes.io/projected/7df1fa2a-8fa4-4f60-a652-7af216af02a1-kube-api-access-sswdh\") pod \"calico-typha-64dbc6fccb-8krqg\" (UID: \"7df1fa2a-8fa4-4f60-a652-7af216af02a1\") " pod="calico-system/calico-typha-64dbc6fccb-8krqg" Nov 4 23:55:34.266374 systemd[1]: Created slice kubepods-besteffort-pod1bf977cf_6643_458a_8b0d_bbbccaaf2a1b.slice - libcontainer container kubepods-besteffort-pod1bf977cf_6643_458a_8b0d_bbbccaaf2a1b.slice. 
Nov 4 23:55:34.309029 kubelet[2775]: I1104 23:55:34.308947 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-xtables-lock\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.309253 kubelet[2775]: I1104 23:55:34.309065 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zlz5\" (UniqueName: \"kubernetes.io/projected/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-kube-api-access-5zlz5\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.309253 kubelet[2775]: I1104 23:55:34.309099 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-var-run-calico\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.309253 kubelet[2775]: I1104 23:55:34.309124 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-flexvol-driver-host\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.309253 kubelet[2775]: I1104 23:55:34.309162 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-tigera-ca-bundle\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.309253 kubelet[2775]: I1104 
23:55:34.309180 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-var-lib-calico\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.309479 kubelet[2775]: I1104 23:55:34.309200 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-policysync\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.309479 kubelet[2775]: I1104 23:55:34.309259 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-cni-bin-dir\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.309479 kubelet[2775]: I1104 23:55:34.309280 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-cni-net-dir\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.309479 kubelet[2775]: I1104 23:55:34.309301 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-lib-modules\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.309479 kubelet[2775]: I1104 23:55:34.309337 2775 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-node-certs\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.309698 kubelet[2775]: I1104 23:55:34.309367 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1bf977cf-6643-458a-8b0d-bbbccaaf2a1b-cni-log-dir\") pod \"calico-node-t2ljf\" (UID: \"1bf977cf-6643-458a-8b0d-bbbccaaf2a1b\") " pod="calico-system/calico-node-t2ljf" Nov 4 23:55:34.386054 kubelet[2775]: E1104 23:55:34.385628 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a" Nov 4 23:55:34.410189 kubelet[2775]: I1104 23:55:34.410121 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9154d8d-6fa3-4eb3-9ec8-93848d59c99a-kubelet-dir\") pod \"csi-node-driver-8sppm\" (UID: \"c9154d8d-6fa3-4eb3-9ec8-93848d59c99a\") " pod="calico-system/csi-node-driver-8sppm" Nov 4 23:55:34.410189 kubelet[2775]: I1104 23:55:34.410170 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c9154d8d-6fa3-4eb3-9ec8-93848d59c99a-varrun\") pod \"csi-node-driver-8sppm\" (UID: \"c9154d8d-6fa3-4eb3-9ec8-93848d59c99a\") " pod="calico-system/csi-node-driver-8sppm" Nov 4 23:55:34.410428 kubelet[2775]: I1104 23:55:34.410293 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/c9154d8d-6fa3-4eb3-9ec8-93848d59c99a-registration-dir\") pod \"csi-node-driver-8sppm\" (UID: \"c9154d8d-6fa3-4eb3-9ec8-93848d59c99a\") " pod="calico-system/csi-node-driver-8sppm" Nov 4 23:55:34.410428 kubelet[2775]: I1104 23:55:34.410393 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9154d8d-6fa3-4eb3-9ec8-93848d59c99a-socket-dir\") pod \"csi-node-driver-8sppm\" (UID: \"c9154d8d-6fa3-4eb3-9ec8-93848d59c99a\") " pod="calico-system/csi-node-driver-8sppm" Nov 4 23:55:34.410428 kubelet[2775]: I1104 23:55:34.410419 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7shc\" (UniqueName: \"kubernetes.io/projected/c9154d8d-6fa3-4eb3-9ec8-93848d59c99a-kube-api-access-m7shc\") pod \"csi-node-driver-8sppm\" (UID: \"c9154d8d-6fa3-4eb3-9ec8-93848d59c99a\") " pod="calico-system/csi-node-driver-8sppm" Nov 4 23:55:34.414130 kubelet[2775]: E1104 23:55:34.414081 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:34.414130 kubelet[2775]: W1104 23:55:34.414109 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:34.414130 kubelet[2775]: E1104 23:55:34.414137 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:34.456989 kubelet[2775]: E1104 23:55:34.455580 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:34.458775 containerd[1607]: time="2025-11-04T23:55:34.458705480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64dbc6fccb-8krqg,Uid:7df1fa2a-8fa4-4f60-a652-7af216af02a1,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:34.530772 kubelet[2775]: E1104 23:55:34.530729 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:34.530772 kubelet[2775]: W1104 23:55:34.530746 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:34.530772 kubelet[2775]: E1104 23:55:34.530760 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:34.531250 kubelet[2775]: E1104 23:55:34.531054 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:34.531250 kubelet[2775]: W1104 23:55:34.531068 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:34.531250 kubelet[2775]: E1104 23:55:34.531081 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:34.534907 containerd[1607]: time="2025-11-04T23:55:34.534711045Z" level=info msg="connecting to shim 19b1f0868f131e9bf234aa4ac81dc7fd433fb95fc66839c745cde264bc7fc636" address="unix:///run/containerd/s/7283aad1e5163bbb149981d6899d7ba71d3f75384a809be8b3e792b4921e9fdd" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:34.554995 kubelet[2775]: E1104 23:55:34.554718 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:34.554995 kubelet[2775]: W1104 23:55:34.554767 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:34.555597 kubelet[2775]: E1104 23:55:34.554794 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:34.570485 kubelet[2775]: E1104 23:55:34.570237 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:34.574062 containerd[1607]: time="2025-11-04T23:55:34.574006635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t2ljf,Uid:1bf977cf-6643-458a-8b0d-bbbccaaf2a1b,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:34.579381 systemd[1]: Started cri-containerd-19b1f0868f131e9bf234aa4ac81dc7fd433fb95fc66839c745cde264bc7fc636.scope - libcontainer container 19b1f0868f131e9bf234aa4ac81dc7fd433fb95fc66839c745cde264bc7fc636. Nov 4 23:55:34.659655 containerd[1607]: time="2025-11-04T23:55:34.659393651Z" level=info msg="connecting to shim 2a23a2b24913b79ce669411b045c3c327476b0469fb04910eb59db7eb0a40766" address="unix:///run/containerd/s/4cc612472614abc91e59113e884cda9ba383df810965737b3b09208a1758c6ad" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:34.727399 systemd[1]: Started cri-containerd-2a23a2b24913b79ce669411b045c3c327476b0469fb04910eb59db7eb0a40766.scope - libcontainer container 2a23a2b24913b79ce669411b045c3c327476b0469fb04910eb59db7eb0a40766. 
Nov 4 23:55:34.816102 containerd[1607]: time="2025-11-04T23:55:34.815921498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64dbc6fccb-8krqg,Uid:7df1fa2a-8fa4-4f60-a652-7af216af02a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"19b1f0868f131e9bf234aa4ac81dc7fd433fb95fc66839c745cde264bc7fc636\""
Nov 4 23:55:34.818986 kubelet[2775]: E1104 23:55:34.818506 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:55:34.824366 containerd[1607]: time="2025-11-04T23:55:34.823480104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 4 23:55:34.837257 containerd[1607]: time="2025-11-04T23:55:34.837205663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t2ljf,Uid:1bf977cf-6643-458a-8b0d-bbbccaaf2a1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a23a2b24913b79ce669411b045c3c327476b0469fb04910eb59db7eb0a40766\""
Nov 4 23:55:34.840074 kubelet[2775]: E1104 23:55:34.840046 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:55:35.776575 kubelet[2775]: E1104 23:55:35.776492 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a"
Nov 4 23:55:36.236271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3178959233.mount: Deactivated successfully.
Nov 4 23:55:37.442003 containerd[1607]: time="2025-11-04T23:55:37.441155656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:37.442003 containerd[1607]: time="2025-11-04T23:55:37.441962324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 4 23:55:37.442585 containerd[1607]: time="2025-11-04T23:55:37.442421025Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:37.444228 containerd[1607]: time="2025-11-04T23:55:37.444174021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:37.445190 containerd[1607]: time="2025-11-04T23:55:37.445157220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.621637827s"
Nov 4 23:55:37.445324 containerd[1607]: time="2025-11-04T23:55:37.445306448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 4 23:55:37.446741 containerd[1607]: time="2025-11-04T23:55:37.446715622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 4 23:55:37.470074 containerd[1607]: time="2025-11-04T23:55:37.470017023Z" level=info msg="CreateContainer within sandbox \"19b1f0868f131e9bf234aa4ac81dc7fd433fb95fc66839c745cde264bc7fc636\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 4 23:55:37.476471 containerd[1607]: time="2025-11-04T23:55:37.476334784Z" level=info msg="Container 8aac678fe088ac067045962192d756c34a3cb8972c45735ca02b558c2dfabb4b: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:55:37.501886 containerd[1607]: time="2025-11-04T23:55:37.501721226Z" level=info msg="CreateContainer within sandbox \"19b1f0868f131e9bf234aa4ac81dc7fd433fb95fc66839c745cde264bc7fc636\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8aac678fe088ac067045962192d756c34a3cb8972c45735ca02b558c2dfabb4b\""
Nov 4 23:55:37.504105 containerd[1607]: time="2025-11-04T23:55:37.502914355Z" level=info msg="StartContainer for \"8aac678fe088ac067045962192d756c34a3cb8972c45735ca02b558c2dfabb4b\""
Nov 4 23:55:37.504246 containerd[1607]: time="2025-11-04T23:55:37.504140946Z" level=info msg="connecting to shim 8aac678fe088ac067045962192d756c34a3cb8972c45735ca02b558c2dfabb4b" address="unix:///run/containerd/s/7283aad1e5163bbb149981d6899d7ba71d3f75384a809be8b3e792b4921e9fdd" protocol=ttrpc version=3
Nov 4 23:55:37.535208 systemd[1]: Started cri-containerd-8aac678fe088ac067045962192d756c34a3cb8972c45735ca02b558c2dfabb4b.scope - libcontainer container 8aac678fe088ac067045962192d756c34a3cb8972c45735ca02b558c2dfabb4b.
Nov 4 23:55:37.608489 containerd[1607]: time="2025-11-04T23:55:37.608341959Z" level=info msg="StartContainer for \"8aac678fe088ac067045962192d756c34a3cb8972c45735ca02b558c2dfabb4b\" returns successfully" Nov 4 23:55:37.796976 kubelet[2775]: E1104 23:55:37.796110 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a" Nov 4 23:55:37.940194 kubelet[2775]: E1104 23:55:37.940150 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:38.026272 kubelet[2775]: E1104 23:55:38.026231 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.026272 kubelet[2775]: W1104 23:55:38.026259 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.027013 kubelet[2775]: E1104 23:55:38.026981 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.027416 kubelet[2775]: E1104 23:55:38.027397 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.027416 kubelet[2775]: W1104 23:55:38.027413 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.027871 kubelet[2775]: E1104 23:55:38.027432 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.028124 kubelet[2775]: E1104 23:55:38.028107 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.028124 kubelet[2775]: W1104 23:55:38.028122 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.028201 kubelet[2775]: E1104 23:55:38.028136 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.028509 kubelet[2775]: E1104 23:55:38.028493 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.028509 kubelet[2775]: W1104 23:55:38.028506 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.028575 kubelet[2775]: E1104 23:55:38.028518 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.029145 kubelet[2775]: E1104 23:55:38.029124 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.029145 kubelet[2775]: W1104 23:55:38.029139 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.029260 kubelet[2775]: E1104 23:55:38.029153 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.030029 kubelet[2775]: E1104 23:55:38.030002 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.030029 kubelet[2775]: W1104 23:55:38.030018 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.030029 kubelet[2775]: E1104 23:55:38.030032 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.030398 kubelet[2775]: E1104 23:55:38.030380 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.030398 kubelet[2775]: W1104 23:55:38.030395 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.030482 kubelet[2775]: E1104 23:55:38.030407 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.030800 kubelet[2775]: E1104 23:55:38.030780 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.030800 kubelet[2775]: W1104 23:55:38.030795 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.030904 kubelet[2775]: E1104 23:55:38.030807 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.031564 kubelet[2775]: E1104 23:55:38.031544 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.031564 kubelet[2775]: W1104 23:55:38.031559 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.031667 kubelet[2775]: E1104 23:55:38.031572 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.031817 kubelet[2775]: E1104 23:55:38.031801 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.031817 kubelet[2775]: W1104 23:55:38.031816 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.031931 kubelet[2775]: E1104 23:55:38.031849 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.032343 kubelet[2775]: E1104 23:55:38.032323 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.032343 kubelet[2775]: W1104 23:55:38.032337 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.032436 kubelet[2775]: E1104 23:55:38.032348 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.032611 kubelet[2775]: E1104 23:55:38.032597 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.032611 kubelet[2775]: W1104 23:55:38.032609 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.032683 kubelet[2775]: E1104 23:55:38.032621 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.033308 kubelet[2775]: E1104 23:55:38.033287 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.033308 kubelet[2775]: W1104 23:55:38.033302 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.033409 kubelet[2775]: E1104 23:55:38.033315 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.034149 kubelet[2775]: E1104 23:55:38.034120 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.034149 kubelet[2775]: W1104 23:55:38.034135 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.034149 kubelet[2775]: E1104 23:55:38.034149 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.034758 kubelet[2775]: E1104 23:55:38.034739 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.034758 kubelet[2775]: W1104 23:55:38.034755 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.034870 kubelet[2775]: E1104 23:55:38.034768 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.059806 kubelet[2775]: E1104 23:55:38.059429 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.059806 kubelet[2775]: W1104 23:55:38.059456 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.059806 kubelet[2775]: E1104 23:55:38.059481 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.059806 kubelet[2775]: E1104 23:55:38.059810 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.060033 kubelet[2775]: W1104 23:55:38.059820 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.060033 kubelet[2775]: E1104 23:55:38.059853 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.061037 kubelet[2775]: E1104 23:55:38.061008 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.061037 kubelet[2775]: W1104 23:55:38.061030 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.061168 kubelet[2775]: E1104 23:55:38.061048 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.062574 kubelet[2775]: E1104 23:55:38.062482 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.062574 kubelet[2775]: W1104 23:55:38.062502 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.062574 kubelet[2775]: E1104 23:55:38.062518 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.062807 kubelet[2775]: E1104 23:55:38.062756 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.062807 kubelet[2775]: W1104 23:55:38.062765 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.062807 kubelet[2775]: E1104 23:55:38.062777 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.063583 kubelet[2775]: E1104 23:55:38.063036 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.063583 kubelet[2775]: W1104 23:55:38.063047 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.063583 kubelet[2775]: E1104 23:55:38.063058 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.063691 kubelet[2775]: E1104 23:55:38.063593 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.063691 kubelet[2775]: W1104 23:55:38.063605 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.063691 kubelet[2775]: E1104 23:55:38.063617 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.063914 kubelet[2775]: E1104 23:55:38.063890 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.063914 kubelet[2775]: W1104 23:55:38.063904 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.063914 kubelet[2775]: E1104 23:55:38.063915 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.065304 kubelet[2775]: E1104 23:55:38.065276 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.065304 kubelet[2775]: W1104 23:55:38.065292 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.065304 kubelet[2775]: E1104 23:55:38.065307 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.066124 kubelet[2775]: E1104 23:55:38.066038 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.066124 kubelet[2775]: W1104 23:55:38.066049 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.066124 kubelet[2775]: E1104 23:55:38.066063 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:55:38.066589 kubelet[2775]: E1104 23:55:38.066331 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.066589 kubelet[2775]: W1104 23:55:38.066339 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.066589 kubelet[2775]: E1104 23:55:38.066350 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:55:38.066589 kubelet[2775]: E1104 23:55:38.066524 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:55:38.066589 kubelet[2775]: W1104 23:55:38.066531 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:55:38.066589 kubelet[2775]: E1104 23:55:38.066542 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 4 23:55:38.067281 kubelet[2775]: E1104 23:55:38.067255 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.067281 kubelet[2775]: W1104 23:55:38.067269 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.067281 kubelet[2775]: E1104 23:55:38.067282 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.068938 kubelet[2775]: E1104 23:55:38.068918 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.068938 kubelet[2775]: W1104 23:55:38.068933 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.069060 kubelet[2775]: E1104 23:55:38.068949 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.070188 kubelet[2775]: E1104 23:55:38.070168 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.070188 kubelet[2775]: W1104 23:55:38.070183 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.070302 kubelet[2775]: E1104 23:55:38.070195 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.070597 kubelet[2775]: E1104 23:55:38.070581 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.070597 kubelet[2775]: W1104 23:55:38.070595 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.070698 kubelet[2775]: E1104 23:55:38.070606 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.071449 kubelet[2775]: E1104 23:55:38.071205 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.071449 kubelet[2775]: W1104 23:55:38.071225 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.071449 kubelet[2775]: E1104 23:55:38.071240 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.071839 kubelet[2775]: E1104 23:55:38.071726 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.072016 kubelet[2775]: W1104 23:55:38.071998 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.072168 kubelet[2775]: E1104 23:55:38.072153 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 4 23:55:38.919240 containerd[1607]: time="2025-11-04T23:55:38.919188234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:38.920135 containerd[1607]: time="2025-11-04T23:55:38.919945842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 4 23:55:38.920774 containerd[1607]: time="2025-11-04T23:55:38.920687925Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:38.922614 containerd[1607]: time="2025-11-04T23:55:38.922579887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:38.923410 containerd[1607]: time="2025-11-04T23:55:38.923336007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.475180757s"
Nov 4 23:55:38.923410 containerd[1607]: time="2025-11-04T23:55:38.923369944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 4 23:55:38.939582 kubelet[2775]: I1104 23:55:38.939009 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 4 23:55:38.939582 kubelet[2775]: E1104 23:55:38.939470 2775 dns.go:153] "Nameserver limits
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:55:38.941080 kubelet[2775]: E1104 23:55:38.941056 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.941120 kubelet[2775]: W1104 23:55:38.941082 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.941161 kubelet[2775]: E1104 23:55:38.941133 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.942965 kubelet[2775]: E1104 23:55:38.941552 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.942965 kubelet[2775]: W1104 23:55:38.941576 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.942965 kubelet[2775]: E1104 23:55:38.941619 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.942965 kubelet[2775]: E1104 23:55:38.941990 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.942965 kubelet[2775]: W1104 23:55:38.942030 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.942965 kubelet[2775]: E1104 23:55:38.942049 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.942965 kubelet[2775]: E1104 23:55:38.942365 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.942965 kubelet[2775]: W1104 23:55:38.942393 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.942965 kubelet[2775]: E1104 23:55:38.942412 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.942965 kubelet[2775]: E1104 23:55:38.942650 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.943350 kubelet[2775]: W1104 23:55:38.942661 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.943350 kubelet[2775]: E1104 23:55:38.942673 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.943842 kubelet[2775]: E1104 23:55:38.943800 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.943842 kubelet[2775]: W1104 23:55:38.943818 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.943953 kubelet[2775]: E1104 23:55:38.943853 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.944269 kubelet[2775]: E1104 23:55:38.944045 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.944269 kubelet[2775]: W1104 23:55:38.944059 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.944269 kubelet[2775]: E1104 23:55:38.944071 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.945941 kubelet[2775]: E1104 23:55:38.944665 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.945941 kubelet[2775]: W1104 23:55:38.944684 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.945941 kubelet[2775]: E1104 23:55:38.944698 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.945941 kubelet[2775]: E1104 23:55:38.945163 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.945941 kubelet[2775]: W1104 23:55:38.945174 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.945941 kubelet[2775]: E1104 23:55:38.945188 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.945941 kubelet[2775]: E1104 23:55:38.945557 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.945941 kubelet[2775]: W1104 23:55:38.945570 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.945941 kubelet[2775]: E1104 23:55:38.945586 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.946208 kubelet[2775]: E1104 23:55:38.945966 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.946208 kubelet[2775]: W1104 23:55:38.945981 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.946208 kubelet[2775]: E1104 23:55:38.945996 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.947756 kubelet[2775]: E1104 23:55:38.946359 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.947756 kubelet[2775]: W1104 23:55:38.946398 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.947756 kubelet[2775]: E1104 23:55:38.946416 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.947756 kubelet[2775]: E1104 23:55:38.946743 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.947756 kubelet[2775]: W1104 23:55:38.946755 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.947756 kubelet[2775]: E1104 23:55:38.946769 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.947756 kubelet[2775]: E1104 23:55:38.947125 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.947756 kubelet[2775]: W1104 23:55:38.947140 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.947756 kubelet[2775]: E1104 23:55:38.947156 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 4 23:55:38.947756 kubelet[2775]: E1104 23:55:38.947402 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.948085 kubelet[2775]: W1104 23:55:38.947443 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.948085 kubelet[2775]: E1104 23:55:38.947464 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.956564 containerd[1607]: time="2025-11-04T23:55:38.954461869Z" level=info msg="CreateContainer within sandbox \"2a23a2b24913b79ce669411b045c3c327476b0469fb04910eb59db7eb0a40766\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 4 23:55:38.967063 containerd[1607]: time="2025-11-04T23:55:38.967006318Z" level=info msg="Container d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:55:38.971778 kubelet[2775]: E1104 23:55:38.971621 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.971778 kubelet[2775]: W1104 23:55:38.971712 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.972578 kubelet[2775]: E1104 23:55:38.971740 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 4 23:55:38.974217 kubelet[2775]: E1104 23:55:38.974015 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.974217 kubelet[2775]: W1104 23:55:38.974108 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.974217 kubelet[2775]: E1104 23:55:38.974141 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.974658 kubelet[2775]: E1104 23:55:38.974618 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.974658 kubelet[2775]: W1104 23:55:38.974647 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.974786 kubelet[2775]: E1104 23:55:38.974673 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.974994 kubelet[2775]: E1104 23:55:38.974975 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.974994 kubelet[2775]: W1104 23:55:38.974994 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.975172 kubelet[2775]: E1104 23:55:38.975013 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.976384 kubelet[2775]: E1104 23:55:38.976362 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.976384 kubelet[2775]: W1104 23:55:38.976381 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.976534 kubelet[2775]: E1104 23:55:38.976398 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.976644 kubelet[2775]: E1104 23:55:38.976626 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.976644 kubelet[2775]: W1104 23:55:38.976643 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.976824 kubelet[2775]: E1104 23:55:38.976657 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.977027 kubelet[2775]: E1104 23:55:38.977007 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.977082 kubelet[2775]: W1104 23:55:38.977024 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.977082 kubelet[2775]: E1104 23:55:38.977040 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.977424 kubelet[2775]: E1104 23:55:38.977404 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.977424 kubelet[2775]: W1104 23:55:38.977420 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.977537 kubelet[2775]: E1104 23:55:38.977435 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.977803 kubelet[2775]: E1104 23:55:38.977783 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.977803 kubelet[2775]: W1104 23:55:38.977800 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.977972 kubelet[2775]: E1104 23:55:38.977814 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.979264 kubelet[2775]: E1104 23:55:38.979232 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.979264 kubelet[2775]: W1104 23:55:38.979252 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.979264 kubelet[2775]: E1104 23:55:38.979267 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.979543 kubelet[2775]: E1104 23:55:38.979527 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.979543 kubelet[2775]: W1104 23:55:38.979541 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.979633 kubelet[2775]: E1104 23:55:38.979554 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.980073 kubelet[2775]: E1104 23:55:38.980043 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.980073 kubelet[2775]: W1104 23:55:38.980062 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.980196 kubelet[2775]: E1104 23:55:38.980079 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.980488 kubelet[2775]: E1104 23:55:38.980470 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.980488 kubelet[2775]: W1104 23:55:38.980487 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.980618 kubelet[2775]: E1104 23:55:38.980522 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.980935 kubelet[2775]: E1104 23:55:38.980916 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.980935 kubelet[2775]: W1104 23:55:38.980957 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.981082 kubelet[2775]: E1104 23:55:38.981059 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.981626 kubelet[2775]: E1104 23:55:38.981606 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.981626 kubelet[2775]: W1104 23:55:38.981623 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.981626 kubelet[2775]: E1104 23:55:38.981639 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.983344 kubelet[2775]: E1104 23:55:38.983316 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.983344 kubelet[2775]: W1104 23:55:38.983338 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.983480 kubelet[2775]: E1104 23:55:38.983357 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.984189 kubelet[2775]: E1104 23:55:38.984022 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.984189 kubelet[2775]: W1104 23:55:38.984038 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.984189 kubelet[2775]: E1104 23:55:38.984074 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 4 23:55:38.984419 kubelet[2775]: E1104 23:55:38.984404 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:55:38.984487 kubelet[2775]: W1104 23:55:38.984476 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:55:38.984534 kubelet[2775]: E1104 23:55:38.984525 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:55:38.987314 containerd[1607]: time="2025-11-04T23:55:38.987202360Z" level=info msg="CreateContainer within sandbox \"2a23a2b24913b79ce669411b045c3c327476b0469fb04910eb59db7eb0a40766\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2\""
Nov 4 23:55:38.989220 containerd[1607]: time="2025-11-04T23:55:38.989164577Z" level=info msg="StartContainer for \"d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2\""
Nov 4 23:55:38.991846 containerd[1607]: time="2025-11-04T23:55:38.991766768Z" level=info msg="connecting to shim d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2" address="unix:///run/containerd/s/4cc612472614abc91e59113e884cda9ba383df810965737b3b09208a1758c6ad" protocol=ttrpc version=3
Nov 4 23:55:39.021109 systemd[1]: Started cri-containerd-d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2.scope - libcontainer container d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2.
Nov 4 23:55:39.081685 containerd[1607]: time="2025-11-04T23:55:39.081530527Z" level=info msg="StartContainer for \"d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2\" returns successfully"
Nov 4 23:55:39.098872 systemd[1]: cri-containerd-d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2.scope: Deactivated successfully.
Nov 4 23:55:39.134919 containerd[1607]: time="2025-11-04T23:55:39.134727132Z" level=info msg="received exit event container_id:\"d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2\" id:\"d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2\" pid:3487 exited_at:{seconds:1762300539 nanos:103799532}"
Nov 4 23:55:39.164472 containerd[1607]: time="2025-11-04T23:55:39.164411497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2\" id:\"d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2\" pid:3487 exited_at:{seconds:1762300539 nanos:103799532}"
Nov 4 23:55:39.201093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0e6aad6446358bb9525922572457a044c66359ec7536659e09c273ebbed2dc2-rootfs.mount: Deactivated successfully.
Nov 4 23:55:39.776494 kubelet[2775]: E1104 23:55:39.776051 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a"
Nov 4 23:55:39.943057 kubelet[2775]: E1104 23:55:39.943020 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:55:39.944679 containerd[1607]: time="2025-11-04T23:55:39.944644809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 4 23:55:39.973074 kubelet[2775]: I1104 23:55:39.972696 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64dbc6fccb-8krqg" podStartSLOduration=3.348543244 podStartE2EDuration="5.972676015s" podCreationTimestamp="2025-11-04 23:55:34 +0000 UTC" firstStartedPulling="2025-11-04 23:55:34.822160261 +0000 UTC m=+24.279083691" lastFinishedPulling="2025-11-04 23:55:37.446293026 +0000 UTC m=+26.903216462" observedRunningTime="2025-11-04 23:55:38.001494036 +0000 UTC m=+27.458417474" watchObservedRunningTime="2025-11-04 23:55:39.972676015 +0000 UTC m=+29.429599454"
Nov 4 23:55:41.776139 kubelet[2775]: E1104 23:55:41.776057 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a"
Nov 4 23:55:43.779460 kubelet[2775]: E1104 23:55:43.779344 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a"
Nov 4 23:55:43.961265 containerd[1607]: time="2025-11-04T23:55:43.960398755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:43.961265 containerd[1607]: time="2025-11-04T23:55:43.961197800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 4 23:55:43.961906 containerd[1607]: time="2025-11-04T23:55:43.961877802Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:43.963723 containerd[1607]: time="2025-11-04T23:55:43.963691249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:43.964598 containerd[1607]: time="2025-11-04T23:55:43.964560833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.019880549s"
Nov 4 23:55:43.964598 containerd[1607]: time="2025-11-04T23:55:43.964592925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 4 23:55:43.985660 containerd[1607]: time="2025-11-04T23:55:43.985597537Z" level=info msg="CreateContainer within sandbox \"2a23a2b24913b79ce669411b045c3c327476b0469fb04910eb59db7eb0a40766\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 4 23:55:44.003673 containerd[1607]: time="2025-11-04T23:55:44.003618005Z" level=info msg="Container 74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:55:44.017157 containerd[1607]: time="2025-11-04T23:55:44.017073942Z" level=info msg="CreateContainer within sandbox \"2a23a2b24913b79ce669411b045c3c327476b0469fb04910eb59db7eb0a40766\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd\""
Nov 4 23:55:44.018216 containerd[1607]: time="2025-11-04T23:55:44.018179291Z" level=info msg="StartContainer for \"74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd\""
Nov 4 23:55:44.020244 containerd[1607]: time="2025-11-04T23:55:44.020185399Z" level=info msg="connecting to shim 74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd" address="unix:///run/containerd/s/4cc612472614abc91e59113e884cda9ba383df810965737b3b09208a1758c6ad" protocol=ttrpc version=3
Nov 4 23:55:44.069096 systemd[1]: Started cri-containerd-74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd.scope - libcontainer container 74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd.
Nov 4 23:55:44.130230 containerd[1607]: time="2025-11-04T23:55:44.130148783Z" level=info msg="StartContainer for \"74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd\" returns successfully"
Nov 4 23:55:44.737142 systemd[1]: cri-containerd-74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd.scope: Deactivated successfully.
Nov 4 23:55:44.737414 systemd[1]: cri-containerd-74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd.scope: Consumed 637ms CPU time, 166.7M memory peak, 14.1M read from disk, 171.3M written to disk.
Nov 4 23:55:44.742115 containerd[1607]: time="2025-11-04T23:55:44.742063501Z" level=info msg="received exit event container_id:\"74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd\" id:\"74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd\" pid:3546 exited_at:{seconds:1762300544 nanos:741514152}" Nov 4 23:55:44.754017 containerd[1607]: time="2025-11-04T23:55:44.753952703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd\" id:\"74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd\" pid:3546 exited_at:{seconds:1762300544 nanos:741514152}" Nov 4 23:55:44.817192 kubelet[2775]: I1104 23:55:44.816919 2775 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 4 23:55:44.926023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74da411aaa5aff53ebc3b44bd8983ec3489252f2612bc332c6715941edc5badd-rootfs.mount: Deactivated successfully. Nov 4 23:55:44.987634 systemd[1]: Created slice kubepods-burstable-podc8424ca1_9bdd_4f3c_ba8d_b16c31b9ab19.slice - libcontainer container kubepods-burstable-podc8424ca1_9bdd_4f3c_ba8d_b16c31b9ab19.slice. Nov 4 23:55:45.005326 systemd[1]: Created slice kubepods-burstable-podf648fe07_b491_4dfe_97d4_96bc7bd0b7c5.slice - libcontainer container kubepods-burstable-podf648fe07_b491_4dfe_97d4_96bc7bd0b7c5.slice. 
Nov 4 23:55:45.006140 kubelet[2775]: E1104 23:55:45.005648 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:45.010534 containerd[1607]: time="2025-11-04T23:55:45.010052410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 4 23:55:45.024102 kubelet[2775]: I1104 23:55:45.023989 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19-config-volume\") pod \"coredns-674b8bbfcf-2gxn6\" (UID: \"c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19\") " pod="kube-system/coredns-674b8bbfcf-2gxn6" Nov 4 23:55:45.024102 kubelet[2775]: I1104 23:55:45.024057 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ae96bfe9-1b65-45cb-977e-23d44d98b741-goldmane-key-pair\") pod \"goldmane-666569f655-gjgtk\" (UID: \"ae96bfe9-1b65-45cb-977e-23d44d98b741\") " pod="calico-system/goldmane-666569f655-gjgtk" Nov 4 23:55:45.024596 kubelet[2775]: I1104 23:55:45.024536 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgv7\" (UniqueName: \"kubernetes.io/projected/ae96bfe9-1b65-45cb-977e-23d44d98b741-kube-api-access-2kgv7\") pod \"goldmane-666569f655-gjgtk\" (UID: \"ae96bfe9-1b65-45cb-977e-23d44d98b741\") " pod="calico-system/goldmane-666569f655-gjgtk" Nov 4 23:55:45.024963 kubelet[2775]: I1104 23:55:45.024710 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-whisker-backend-key-pair\") pod \"whisker-5dcd488696-jc8hr\" (UID: \"10e2f5c8-b4e1-4a46-95ef-00eee96dc698\") " 
pod="calico-system/whisker-5dcd488696-jc8hr" Nov 4 23:55:45.028953 kubelet[2775]: I1104 23:55:45.025822 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-whisker-ca-bundle\") pod \"whisker-5dcd488696-jc8hr\" (UID: \"10e2f5c8-b4e1-4a46-95ef-00eee96dc698\") " pod="calico-system/whisker-5dcd488696-jc8hr" Nov 4 23:55:45.028953 kubelet[2775]: I1104 23:55:45.025879 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6-calico-apiserver-certs\") pod \"calico-apiserver-6fb7b7f48c-4gxxx\" (UID: \"2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6\") " pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" Nov 4 23:55:45.028953 kubelet[2775]: I1104 23:55:45.025914 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f648fe07-b491-4dfe-97d4-96bc7bd0b7c5-config-volume\") pod \"coredns-674b8bbfcf-5kmzn\" (UID: \"f648fe07-b491-4dfe-97d4-96bc7bd0b7c5\") " pod="kube-system/coredns-674b8bbfcf-5kmzn" Nov 4 23:55:45.028953 kubelet[2775]: I1104 23:55:45.025940 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzcr7\" (UniqueName: \"kubernetes.io/projected/f648fe07-b491-4dfe-97d4-96bc7bd0b7c5-kube-api-access-fzcr7\") pod \"coredns-674b8bbfcf-5kmzn\" (UID: \"f648fe07-b491-4dfe-97d4-96bc7bd0b7c5\") " pod="kube-system/coredns-674b8bbfcf-5kmzn" Nov 4 23:55:45.028953 kubelet[2775]: I1104 23:55:45.025965 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3251e39b-c4eb-4874-a146-0948813f5507-calico-apiserver-certs\") pod 
\"calico-apiserver-6fb7b7f48c-cbmng\" (UID: \"3251e39b-c4eb-4874-a146-0948813f5507\") " pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" Nov 4 23:55:45.026172 systemd[1]: Created slice kubepods-besteffort-pod3251e39b_c4eb_4874_a146_0948813f5507.slice - libcontainer container kubepods-besteffort-pod3251e39b_c4eb_4874_a146_0948813f5507.slice. Nov 4 23:55:45.029293 kubelet[2775]: I1104 23:55:45.025989 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9087d4ae-63b3-470b-8bf4-d4e7bf32985a-tigera-ca-bundle\") pod \"calico-kube-controllers-7d5bd6bf98-l6dql\" (UID: \"9087d4ae-63b3-470b-8bf4-d4e7bf32985a\") " pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" Nov 4 23:55:45.029293 kubelet[2775]: I1104 23:55:45.026023 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-528kc\" (UniqueName: \"kubernetes.io/projected/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-kube-api-access-528kc\") pod \"whisker-5dcd488696-jc8hr\" (UID: \"10e2f5c8-b4e1-4a46-95ef-00eee96dc698\") " pod="calico-system/whisker-5dcd488696-jc8hr" Nov 4 23:55:45.029293 kubelet[2775]: I1104 23:55:45.026046 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae96bfe9-1b65-45cb-977e-23d44d98b741-goldmane-ca-bundle\") pod \"goldmane-666569f655-gjgtk\" (UID: \"ae96bfe9-1b65-45cb-977e-23d44d98b741\") " pod="calico-system/goldmane-666569f655-gjgtk" Nov 4 23:55:45.029293 kubelet[2775]: I1104 23:55:45.026070 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p75tg\" (UniqueName: \"kubernetes.io/projected/2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6-kube-api-access-p75tg\") pod \"calico-apiserver-6fb7b7f48c-4gxxx\" (UID: \"2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6\") " 
pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" Nov 4 23:55:45.029293 kubelet[2775]: I1104 23:55:45.026097 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2qtb\" (UniqueName: \"kubernetes.io/projected/c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19-kube-api-access-c2qtb\") pod \"coredns-674b8bbfcf-2gxn6\" (UID: \"c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19\") " pod="kube-system/coredns-674b8bbfcf-2gxn6" Nov 4 23:55:45.029431 kubelet[2775]: I1104 23:55:45.026115 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae96bfe9-1b65-45cb-977e-23d44d98b741-config\") pod \"goldmane-666569f655-gjgtk\" (UID: \"ae96bfe9-1b65-45cb-977e-23d44d98b741\") " pod="calico-system/goldmane-666569f655-gjgtk" Nov 4 23:55:45.029431 kubelet[2775]: I1104 23:55:45.026131 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7gnz\" (UniqueName: \"kubernetes.io/projected/3251e39b-c4eb-4874-a146-0948813f5507-kube-api-access-v7gnz\") pod \"calico-apiserver-6fb7b7f48c-cbmng\" (UID: \"3251e39b-c4eb-4874-a146-0948813f5507\") " pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" Nov 4 23:55:45.029431 kubelet[2775]: I1104 23:55:45.026149 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsttl\" (UniqueName: \"kubernetes.io/projected/9087d4ae-63b3-470b-8bf4-d4e7bf32985a-kube-api-access-wsttl\") pod \"calico-kube-controllers-7d5bd6bf98-l6dql\" (UID: \"9087d4ae-63b3-470b-8bf4-d4e7bf32985a\") " pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" Nov 4 23:55:45.039005 systemd[1]: Created slice kubepods-besteffort-pod2ae466a7_998d_43d9_8b5d_5ca3ee8d4af6.slice - libcontainer container kubepods-besteffort-pod2ae466a7_998d_43d9_8b5d_5ca3ee8d4af6.slice. 
Nov 4 23:55:45.051504 systemd[1]: Created slice kubepods-besteffort-pod9087d4ae_63b3_470b_8bf4_d4e7bf32985a.slice - libcontainer container kubepods-besteffort-pod9087d4ae_63b3_470b_8bf4_d4e7bf32985a.slice. Nov 4 23:55:45.060384 systemd[1]: Created slice kubepods-besteffort-pod10e2f5c8_b4e1_4a46_95ef_00eee96dc698.slice - libcontainer container kubepods-besteffort-pod10e2f5c8_b4e1_4a46_95ef_00eee96dc698.slice. Nov 4 23:55:45.075128 systemd[1]: Created slice kubepods-besteffort-podae96bfe9_1b65_45cb_977e_23d44d98b741.slice - libcontainer container kubepods-besteffort-podae96bfe9_1b65_45cb_977e_23d44d98b741.slice. Nov 4 23:55:45.297677 kubelet[2775]: E1104 23:55:45.297403 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:45.307485 containerd[1607]: time="2025-11-04T23:55:45.307177968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2gxn6,Uid:c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:45.316114 kubelet[2775]: E1104 23:55:45.316048 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:45.317251 containerd[1607]: time="2025-11-04T23:55:45.317191790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5kmzn,Uid:f648fe07-b491-4dfe-97d4-96bc7bd0b7c5,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:45.341229 containerd[1607]: time="2025-11-04T23:55:45.340955883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7b7f48c-cbmng,Uid:3251e39b-c4eb-4874-a146-0948813f5507,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:55:45.358380 containerd[1607]: time="2025-11-04T23:55:45.358323603Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6fb7b7f48c-4gxxx,Uid:2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:55:45.361206 containerd[1607]: time="2025-11-04T23:55:45.360889027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5bd6bf98-l6dql,Uid:9087d4ae-63b3-470b-8bf4-d4e7bf32985a,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:45.370042 containerd[1607]: time="2025-11-04T23:55:45.369982854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dcd488696-jc8hr,Uid:10e2f5c8-b4e1-4a46-95ef-00eee96dc698,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:45.382843 containerd[1607]: time="2025-11-04T23:55:45.382783242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gjgtk,Uid:ae96bfe9-1b65-45cb-977e-23d44d98b741,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:45.704245 containerd[1607]: time="2025-11-04T23:55:45.703980873Z" level=error msg="Failed to destroy network for sandbox \"bd2d7844c44335bd65ffffb3057b0cbaf724904c65ae34667a29eabf05224220\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.715751 containerd[1607]: time="2025-11-04T23:55:45.715621349Z" level=error msg="Failed to destroy network for sandbox \"41311a85f6b770ad8a41086b84c95ba4333ab58d0f94fdc8de85964fd0a1ddbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.731081 containerd[1607]: time="2025-11-04T23:55:45.731010274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gjgtk,Uid:ae96bfe9-1b65-45cb-977e-23d44d98b741,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"41311a85f6b770ad8a41086b84c95ba4333ab58d0f94fdc8de85964fd0a1ddbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.731414 containerd[1607]: time="2025-11-04T23:55:45.731303925Z" level=error msg="Failed to destroy network for sandbox \"d8c71757d2e4022e35fabf203ce4895a244ade9d1e8dddba8613b3a9520df342\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.733099 containerd[1607]: time="2025-11-04T23:55:45.733053382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2gxn6,Uid:c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c71757d2e4022e35fabf203ce4895a244ade9d1e8dddba8613b3a9520df342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.737869 containerd[1607]: time="2025-11-04T23:55:45.737613274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7b7f48c-cbmng,Uid:3251e39b-c4eb-4874-a146-0948813f5507,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd2d7844c44335bd65ffffb3057b0cbaf724904c65ae34667a29eabf05224220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.737869 containerd[1607]: time="2025-11-04T23:55:45.737773170Z" level=error msg="Failed to destroy network for sandbox 
\"14ae4620d6e85a5aa6081828a65fde79a8ec491f5d6c130dbaac8b070ac12287\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.739683 containerd[1607]: time="2025-11-04T23:55:45.718291963Z" level=error msg="Failed to destroy network for sandbox \"f98d9ba531168509f61e101846dc08042b3666095c7552c8299220e631669bb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.740038 containerd[1607]: time="2025-11-04T23:55:45.739811928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dcd488696-jc8hr,Uid:10e2f5c8-b4e1-4a46-95ef-00eee96dc698,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ae4620d6e85a5aa6081828a65fde79a8ec491f5d6c130dbaac8b070ac12287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.740142 containerd[1607]: time="2025-11-04T23:55:45.739913123Z" level=error msg="Failed to destroy network for sandbox \"3742fed1388bd62475a28ecc7fee7e31d1e23e6ea65ffbb05c7bcaaa3661362c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.740393 kubelet[2775]: E1104 23:55:45.740329 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41311a85f6b770ad8a41086b84c95ba4333ab58d0f94fdc8de85964fd0a1ddbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.740481 kubelet[2775]: E1104 23:55:45.740446 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41311a85f6b770ad8a41086b84c95ba4333ab58d0f94fdc8de85964fd0a1ddbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gjgtk" Nov 4 23:55:45.740525 kubelet[2775]: E1104 23:55:45.740482 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41311a85f6b770ad8a41086b84c95ba4333ab58d0f94fdc8de85964fd0a1ddbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gjgtk" Nov 4 23:55:45.740593 kubelet[2775]: E1104 23:55:45.740552 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-gjgtk_calico-system(ae96bfe9-1b65-45cb-977e-23d44d98b741)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-gjgtk_calico-system(ae96bfe9-1b65-45cb-977e-23d44d98b741)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41311a85f6b770ad8a41086b84c95ba4333ab58d0f94fdc8de85964fd0a1ddbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gjgtk" podUID="ae96bfe9-1b65-45cb-977e-23d44d98b741" Nov 4 23:55:45.741211 containerd[1607]: time="2025-11-04T23:55:45.740864911Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6fb7b7f48c-4gxxx,Uid:2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f98d9ba531168509f61e101846dc08042b3666095c7552c8299220e631669bb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.741211 containerd[1607]: time="2025-11-04T23:55:45.718360270Z" level=error msg="Failed to destroy network for sandbox \"a7cf3dcf9290ad5afa8b6006016b189e86cd3e9936e90319bf4ff543503a3602\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.741354 kubelet[2775]: E1104 23:55:45.741159 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f98d9ba531168509f61e101846dc08042b3666095c7552c8299220e631669bb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.741354 kubelet[2775]: E1104 23:55:45.741277 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f98d9ba531168509f61e101846dc08042b3666095c7552c8299220e631669bb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" Nov 4 23:55:45.741423 kubelet[2775]: E1104 23:55:45.741357 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"f98d9ba531168509f61e101846dc08042b3666095c7552c8299220e631669bb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" Nov 4 23:55:45.741489 containerd[1607]: time="2025-11-04T23:55:45.741446947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5bd6bf98-l6dql,Uid:9087d4ae-63b3-470b-8bf4-d4e7bf32985a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3742fed1388bd62475a28ecc7fee7e31d1e23e6ea65ffbb05c7bcaaa3661362c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.741552 kubelet[2775]: E1104 23:55:45.741522 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fb7b7f48c-4gxxx_calico-apiserver(2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fb7b7f48c-4gxxx_calico-apiserver(2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f98d9ba531168509f61e101846dc08042b3666095c7552c8299220e631669bb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" podUID="2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6" Nov 4 23:55:45.743237 kubelet[2775]: E1104 23:55:45.741782 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d8c71757d2e4022e35fabf203ce4895a244ade9d1e8dddba8613b3a9520df342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.743440 kubelet[2775]: E1104 23:55:45.743399 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c71757d2e4022e35fabf203ce4895a244ade9d1e8dddba8613b3a9520df342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2gxn6" Nov 4 23:55:45.743735 kubelet[2775]: E1104 23:55:45.743522 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c71757d2e4022e35fabf203ce4895a244ade9d1e8dddba8613b3a9520df342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2gxn6" Nov 4 23:55:45.743735 kubelet[2775]: E1104 23:55:45.743572 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2gxn6_kube-system(c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2gxn6_kube-system(c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8c71757d2e4022e35fabf203ce4895a244ade9d1e8dddba8613b3a9520df342\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2gxn6" 
podUID="c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19" Nov 4 23:55:45.743735 kubelet[2775]: E1104 23:55:45.741804 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd2d7844c44335bd65ffffb3057b0cbaf724904c65ae34667a29eabf05224220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.743978 kubelet[2775]: E1104 23:55:45.743607 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd2d7844c44335bd65ffffb3057b0cbaf724904c65ae34667a29eabf05224220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" Nov 4 23:55:45.743978 kubelet[2775]: E1104 23:55:45.742092 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3742fed1388bd62475a28ecc7fee7e31d1e23e6ea65ffbb05c7bcaaa3661362c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.743978 kubelet[2775]: E1104 23:55:45.743619 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd2d7844c44335bd65ffffb3057b0cbaf724904c65ae34667a29eabf05224220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" Nov 4 23:55:45.744072 kubelet[2775]: E1104 23:55:45.743643 2775 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fb7b7f48c-cbmng_calico-apiserver(3251e39b-c4eb-4874-a146-0948813f5507)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fb7b7f48c-cbmng_calico-apiserver(3251e39b-c4eb-4874-a146-0948813f5507)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd2d7844c44335bd65ffffb3057b0cbaf724904c65ae34667a29eabf05224220\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" podUID="3251e39b-c4eb-4874-a146-0948813f5507" Nov 4 23:55:45.744072 kubelet[2775]: E1104 23:55:45.741819 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ae4620d6e85a5aa6081828a65fde79a8ec491f5d6c130dbaac8b070ac12287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.744072 kubelet[2775]: E1104 23:55:45.743658 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3742fed1388bd62475a28ecc7fee7e31d1e23e6ea65ffbb05c7bcaaa3661362c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" Nov 4 23:55:45.744169 kubelet[2775]: E1104 23:55:45.743678 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ae4620d6e85a5aa6081828a65fde79a8ec491f5d6c130dbaac8b070ac12287\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dcd488696-jc8hr" Nov 4 23:55:45.744169 kubelet[2775]: E1104 23:55:45.743685 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3742fed1388bd62475a28ecc7fee7e31d1e23e6ea65ffbb05c7bcaaa3661362c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" Nov 4 23:55:45.744169 kubelet[2775]: E1104 23:55:45.743691 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ae4620d6e85a5aa6081828a65fde79a8ec491f5d6c130dbaac8b070ac12287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dcd488696-jc8hr" Nov 4 23:55:45.744248 kubelet[2775]: E1104 23:55:45.743712 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dcd488696-jc8hr_calico-system(10e2f5c8-b4e1-4a46-95ef-00eee96dc698)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dcd488696-jc8hr_calico-system(10e2f5c8-b4e1-4a46-95ef-00eee96dc698)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14ae4620d6e85a5aa6081828a65fde79a8ec491f5d6c130dbaac8b070ac12287\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dcd488696-jc8hr" 
podUID="10e2f5c8-b4e1-4a46-95ef-00eee96dc698" Nov 4 23:55:45.744248 kubelet[2775]: E1104 23:55:45.743753 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d5bd6bf98-l6dql_calico-system(9087d4ae-63b3-470b-8bf4-d4e7bf32985a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d5bd6bf98-l6dql_calico-system(9087d4ae-63b3-470b-8bf4-d4e7bf32985a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3742fed1388bd62475a28ecc7fee7e31d1e23e6ea65ffbb05c7bcaaa3661362c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" podUID="9087d4ae-63b3-470b-8bf4-d4e7bf32985a" Nov 4 23:55:45.744481 containerd[1607]: time="2025-11-04T23:55:45.744393190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5kmzn,Uid:f648fe07-b491-4dfe-97d4-96bc7bd0b7c5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7cf3dcf9290ad5afa8b6006016b189e86cd3e9936e90319bf4ff543503a3602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.745029 kubelet[2775]: E1104 23:55:45.744885 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7cf3dcf9290ad5afa8b6006016b189e86cd3e9936e90319bf4ff543503a3602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.745029 kubelet[2775]: E1104 23:55:45.744947 2775 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7cf3dcf9290ad5afa8b6006016b189e86cd3e9936e90319bf4ff543503a3602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5kmzn" Nov 4 23:55:45.745029 kubelet[2775]: E1104 23:55:45.744985 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7cf3dcf9290ad5afa8b6006016b189e86cd3e9936e90319bf4ff543503a3602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5kmzn" Nov 4 23:55:45.745146 kubelet[2775]: E1104 23:55:45.745043 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5kmzn_kube-system(f648fe07-b491-4dfe-97d4-96bc7bd0b7c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5kmzn_kube-system(f648fe07-b491-4dfe-97d4-96bc7bd0b7c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7cf3dcf9290ad5afa8b6006016b189e86cd3e9936e90319bf4ff543503a3602\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5kmzn" podUID="f648fe07-b491-4dfe-97d4-96bc7bd0b7c5" Nov 4 23:55:45.782776 systemd[1]: Created slice kubepods-besteffort-podc9154d8d_6fa3_4eb3_9ec8_93848d59c99a.slice - libcontainer container kubepods-besteffort-podc9154d8d_6fa3_4eb3_9ec8_93848d59c99a.slice. 
Nov 4 23:55:45.785420 containerd[1607]: time="2025-11-04T23:55:45.785358877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sppm,Uid:c9154d8d-6fa3-4eb3-9ec8-93848d59c99a,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:45.857140 containerd[1607]: time="2025-11-04T23:55:45.857083193Z" level=error msg="Failed to destroy network for sandbox \"1c8524c225dd3731932c2b7be1662af7ae88304363e9edb438d525427552e8f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.858109 containerd[1607]: time="2025-11-04T23:55:45.858078449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sppm,Uid:c9154d8d-6fa3-4eb3-9ec8-93848d59c99a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c8524c225dd3731932c2b7be1662af7ae88304363e9edb438d525427552e8f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.858504 kubelet[2775]: E1104 23:55:45.858432 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c8524c225dd3731932c2b7be1662af7ae88304363e9edb438d525427552e8f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:55:45.859209 kubelet[2775]: E1104 23:55:45.858907 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c8524c225dd3731932c2b7be1662af7ae88304363e9edb438d525427552e8f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8sppm" Nov 4 23:55:45.859209 kubelet[2775]: E1104 23:55:45.858951 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c8524c225dd3731932c2b7be1662af7ae88304363e9edb438d525427552e8f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8sppm" Nov 4 23:55:45.859209 kubelet[2775]: E1104 23:55:45.859011 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8sppm_calico-system(c9154d8d-6fa3-4eb3-9ec8-93848d59c99a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8sppm_calico-system(c9154d8d-6fa3-4eb3-9ec8-93848d59c99a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c8524c225dd3731932c2b7be1662af7ae88304363e9edb438d525427552e8f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a" Nov 4 23:55:50.900121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956600262.mount: Deactivated successfully. 
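Every CreatePodSandbox failure above is the same underlying problem repeated per pod: the Calico CNI plugin cannot `stat /var/lib/calico/nodename`, which the error text itself attributes to the calico/node container not yet running. When triaging a burst of records like this, it can help to deduplicate them down to plugin, operation, and root cause. A minimal sketch (the `summarize` helper and its regex are illustrative, not part of any tool shown in this log):

```python
import re

# Matches the CNI failure embedded in the kubelet/containerd records above,
# e.g.: plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: ...
# The quotes may or may not be backslash-escaped depending on nesting depth.
CNI_ERR = re.compile(
    r'plugin type=\\?"(?P<plugin>[^"\\]+)\\?" failed '
    r'\((?P<op>\w+)\): (?P<reason>[^"\\]+)'
)

def summarize(line: str):
    """Return (plugin, operation, reason) for a CNI failure record, or None."""
    m = CNI_ERR.search(line)
    if not m:
        return None
    return m.group("plugin"), m.group("op"), m.group("reason")

# Excerpt of one record from the log above (escaping as it appears in-journal):
record = ('rpc error: code = Unknown desc = failed to setup network for sandbox '
          '\\"1c8524c225dd3731932c2b7be1662af7ae88304363e9edb438d525427552e8f7\\": '
          'plugin type=\\"calico\\" failed (add): stat /var/lib/calico/nodename: '
          'no such file or directory: check that the calico/node container '
          'is running and has mounted /var/lib/calico/\\"')

print(summarize(record))
```

Running this over the whole burst would show a single distinct (plugin, op, reason) triple, which matches what the log later confirms: the errors stop once the calico-node container starts.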
Nov 4 23:55:51.147480 containerd[1607]: time="2025-11-04T23:55:51.139264958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 4 23:55:51.149038 containerd[1607]: time="2025-11-04T23:55:51.143624820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:51.172275 containerd[1607]: time="2025-11-04T23:55:51.172020823Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:51.212630 containerd[1607]: time="2025-11-04T23:55:51.212424585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:55:51.217159 containerd[1607]: time="2025-11-04T23:55:51.217056691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.203200444s" Nov 4 23:55:51.217356 containerd[1607]: time="2025-11-04T23:55:51.217338539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 23:55:51.292052 containerd[1607]: time="2025-11-04T23:55:51.291156433Z" level=info msg="CreateContainer within sandbox \"2a23a2b24913b79ce669411b045c3c327476b0469fb04910eb59db7eb0a40766\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 23:55:51.334261 containerd[1607]: time="2025-11-04T23:55:51.334041190Z" level=info msg="Container 
d70b05d9336b385e964ea6b92a04e84f59963753645cd78a64d2e44d8476ee64: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:51.337914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833455591.mount: Deactivated successfully. Nov 4 23:55:51.384095 containerd[1607]: time="2025-11-04T23:55:51.384036018Z" level=info msg="CreateContainer within sandbox \"2a23a2b24913b79ce669411b045c3c327476b0469fb04910eb59db7eb0a40766\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d70b05d9336b385e964ea6b92a04e84f59963753645cd78a64d2e44d8476ee64\"" Nov 4 23:55:51.385147 containerd[1607]: time="2025-11-04T23:55:51.385064999Z" level=info msg="StartContainer for \"d70b05d9336b385e964ea6b92a04e84f59963753645cd78a64d2e44d8476ee64\"" Nov 4 23:55:51.391037 containerd[1607]: time="2025-11-04T23:55:51.390994469Z" level=info msg="connecting to shim d70b05d9336b385e964ea6b92a04e84f59963753645cd78a64d2e44d8476ee64" address="unix:///run/containerd/s/4cc612472614abc91e59113e884cda9ba383df810965737b3b09208a1758c6ad" protocol=ttrpc version=3 Nov 4 23:55:51.539081 systemd[1]: Started cri-containerd-d70b05d9336b385e964ea6b92a04e84f59963753645cd78a64d2e44d8476ee64.scope - libcontainer container d70b05d9336b385e964ea6b92a04e84f59963753645cd78a64d2e44d8476ee64. Nov 4 23:55:51.651141 containerd[1607]: time="2025-11-04T23:55:51.651090282Z" level=info msg="StartContainer for \"d70b05d9336b385e964ea6b92a04e84f59963753645cd78a64d2e44d8476ee64\" returns successfully" Nov 4 23:55:51.797423 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 23:55:51.799095 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 4 23:55:52.087507 kubelet[2775]: I1104 23:55:52.087461 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-whisker-ca-bundle\") pod \"10e2f5c8-b4e1-4a46-95ef-00eee96dc698\" (UID: \"10e2f5c8-b4e1-4a46-95ef-00eee96dc698\") " Nov 4 23:55:52.088398 kubelet[2775]: I1104 23:55:52.088206 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-528kc\" (UniqueName: \"kubernetes.io/projected/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-kube-api-access-528kc\") pod \"10e2f5c8-b4e1-4a46-95ef-00eee96dc698\" (UID: \"10e2f5c8-b4e1-4a46-95ef-00eee96dc698\") " Nov 4 23:55:52.088398 kubelet[2775]: I1104 23:55:52.088259 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-whisker-backend-key-pair\") pod \"10e2f5c8-b4e1-4a46-95ef-00eee96dc698\" (UID: \"10e2f5c8-b4e1-4a46-95ef-00eee96dc698\") " Nov 4 23:55:52.092501 kubelet[2775]: I1104 23:55:52.092429 2775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "10e2f5c8-b4e1-4a46-95ef-00eee96dc698" (UID: "10e2f5c8-b4e1-4a46-95ef-00eee96dc698"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:55:52.101907 systemd[1]: var-lib-kubelet-pods-10e2f5c8\x2db4e1\x2d4a46\x2d95ef\x2d00eee96dc698-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 4 23:55:52.105805 kubelet[2775]: I1104 23:55:52.105106 2775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "10e2f5c8-b4e1-4a46-95ef-00eee96dc698" (UID: "10e2f5c8-b4e1-4a46-95ef-00eee96dc698"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:55:52.110209 kubelet[2775]: I1104 23:55:52.110115 2775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-kube-api-access-528kc" (OuterVolumeSpecName: "kube-api-access-528kc") pod "10e2f5c8-b4e1-4a46-95ef-00eee96dc698" (UID: "10e2f5c8-b4e1-4a46-95ef-00eee96dc698"). InnerVolumeSpecName "kube-api-access-528kc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:55:52.111478 systemd[1]: var-lib-kubelet-pods-10e2f5c8\x2db4e1\x2d4a46\x2d95ef\x2d00eee96dc698-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d528kc.mount: Deactivated successfully. Nov 4 23:55:52.165639 kubelet[2775]: E1104 23:55:52.165167 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:52.172041 systemd[1]: Removed slice kubepods-besteffort-pod10e2f5c8_b4e1_4a46_95ef_00eee96dc698.slice - libcontainer container kubepods-besteffort-pod10e2f5c8_b4e1_4a46_95ef_00eee96dc698.slice. 
Nov 4 23:55:52.189927 kubelet[2775]: I1104 23:55:52.189644 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-whisker-ca-bundle\") on node \"ci-4487.0.0-n-936e1cfeba\" DevicePath \"\"" Nov 4 23:55:52.190174 kubelet[2775]: I1104 23:55:52.190117 2775 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-528kc\" (UniqueName: \"kubernetes.io/projected/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-kube-api-access-528kc\") on node \"ci-4487.0.0-n-936e1cfeba\" DevicePath \"\"" Nov 4 23:55:52.190174 kubelet[2775]: I1104 23:55:52.190136 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10e2f5c8-b4e1-4a46-95ef-00eee96dc698-whisker-backend-key-pair\") on node \"ci-4487.0.0-n-936e1cfeba\" DevicePath \"\"" Nov 4 23:55:52.206428 kubelet[2775]: I1104 23:55:52.203870 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t2ljf" podStartSLOduration=1.805281827 podStartE2EDuration="18.203851471s" podCreationTimestamp="2025-11-04 23:55:34 +0000 UTC" firstStartedPulling="2025-11-04 23:55:34.840960785 +0000 UTC m=+24.297884204" lastFinishedPulling="2025-11-04 23:55:51.239530431 +0000 UTC m=+40.696453848" observedRunningTime="2025-11-04 23:55:52.201279298 +0000 UTC m=+41.658202737" watchObservedRunningTime="2025-11-04 23:55:52.203851471 +0000 UTC m=+41.660774910" Nov 4 23:55:52.360814 systemd[1]: Created slice kubepods-besteffort-pod911d355e_45cf_436d_93f0_7eb9940b9506.slice - libcontainer container kubepods-besteffort-pod911d355e_45cf_436d_93f0_7eb9940b9506.slice. 
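The pod_startup_latency_tracker record above accounts for where calico-node's 18.2s end-to-end startup went: almost all of it was the image pull. The monotonic offsets (`m=+…`) in `firstStartedPulling` and `lastFinishedPulling` make that arithmetic explicit; a small sketch to recover the pull time (the `pull_seconds` helper is hypothetical, operating on an excerpt of the record above):

```python
import re

# Excerpted fields from the pod_startup_latency_tracker record above.
LINE = ('Observed pod startup duration pod="calico-system/calico-node-t2ljf" '
        'podStartSLOduration=1.805281827 podStartE2EDuration="18.203851471s" '
        'firstStartedPulling="2025-11-04 23:55:34.840960785 +0000 UTC m=+24.297884204" '
        'lastFinishedPulling="2025-11-04 23:55:51.239530431 +0000 UTC m=+40.696453848"')

def pull_seconds(line: str) -> float:
    """Time spent pulling: difference between the monotonic (m=+...) offsets
    of the first pull start and the last pull finish."""
    start = re.search(r'firstStartedPulling="[^"]*m=\+([\d.]+)"', line)
    end = re.search(r'lastFinishedPulling="[^"]*m=\+([\d.]+)"', line)
    return float(end.group(1)) - float(start.group(1))

print(round(pull_seconds(LINE), 3))
```

About 16.4s of pulling inside an 18.2s E2E startup is consistent with the containerd "Pulled image ... in 6.203200444s" record earlier only covering the final layer fetch; the monotonic offsets span the whole pull phase.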
Nov 4 23:55:52.393861 kubelet[2775]: I1104 23:55:52.391983 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lckqj\" (UniqueName: \"kubernetes.io/projected/911d355e-45cf-436d-93f0-7eb9940b9506-kube-api-access-lckqj\") pod \"whisker-7ff6944958-gl9d8\" (UID: \"911d355e-45cf-436d-93f0-7eb9940b9506\") " pod="calico-system/whisker-7ff6944958-gl9d8" Nov 4 23:55:52.394270 kubelet[2775]: I1104 23:55:52.394203 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/911d355e-45cf-436d-93f0-7eb9940b9506-whisker-backend-key-pair\") pod \"whisker-7ff6944958-gl9d8\" (UID: \"911d355e-45cf-436d-93f0-7eb9940b9506\") " pod="calico-system/whisker-7ff6944958-gl9d8" Nov 4 23:55:52.394469 kubelet[2775]: I1104 23:55:52.394381 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/911d355e-45cf-436d-93f0-7eb9940b9506-whisker-ca-bundle\") pod \"whisker-7ff6944958-gl9d8\" (UID: \"911d355e-45cf-436d-93f0-7eb9940b9506\") " pod="calico-system/whisker-7ff6944958-gl9d8" Nov 4 23:55:52.581893 containerd[1607]: time="2025-11-04T23:55:52.580530683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d70b05d9336b385e964ea6b92a04e84f59963753645cd78a64d2e44d8476ee64\" id:\"99a167b38c2fdc6aa3dada074bd68c0bc401f77e8971aa6f6ddb8b24679295f7\" pid:3867 exit_status:1 exited_at:{seconds:1762300552 nanos:540649871}" Nov 4 23:55:52.673766 containerd[1607]: time="2025-11-04T23:55:52.673614463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ff6944958-gl9d8,Uid:911d355e-45cf-436d-93f0-7eb9940b9506,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:52.782380 kubelet[2775]: I1104 23:55:52.782280 2775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="10e2f5c8-b4e1-4a46-95ef-00eee96dc698" path="/var/lib/kubelet/pods/10e2f5c8-b4e1-4a46-95ef-00eee96dc698/volumes" Nov 4 23:55:53.110706 systemd-networkd[1483]: cali4bc5c1560a8: Link UP Nov 4 23:55:53.115964 systemd-networkd[1483]: cali4bc5c1560a8: Gained carrier Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:52.737 [INFO][3891] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:52.779 [INFO][3891] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0 whisker-7ff6944958- calico-system 911d355e-45cf-436d-93f0-7eb9940b9506 918 0 2025-11-04 23:55:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7ff6944958 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487.0.0-n-936e1cfeba whisker-7ff6944958-gl9d8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4bc5c1560a8 [] [] }} ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" Namespace="calico-system" Pod="whisker-7ff6944958-gl9d8" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:52.779 [INFO][3891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" Namespace="calico-system" Pod="whisker-7ff6944958-gl9d8" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.018 [INFO][3903] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" 
HandleID="k8s-pod-network.99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" Workload="ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.021 [INFO][3903] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" HandleID="k8s-pod-network.99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" Workload="ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026d8c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-936e1cfeba", "pod":"whisker-7ff6944958-gl9d8", "timestamp":"2025-11-04 23:55:53.018009747 +0000 UTC"}, Hostname:"ci-4487.0.0-n-936e1cfeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.021 [INFO][3903] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.022 [INFO][3903] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.022 [INFO][3903] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-936e1cfeba' Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.039 [INFO][3903] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.060 [INFO][3903] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.066 [INFO][3903] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.070 [INFO][3903] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.073 [INFO][3903] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.073 [INFO][3903] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.076 [INFO][3903] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74 Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.082 [INFO][3903] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.089 [INFO][3903] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.64.1/26] block=192.168.64.0/26 handle="k8s-pod-network.99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.089 [INFO][3903] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.1/26] handle="k8s-pod-network.99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.090 [INFO][3903] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:55:53.129271 containerd[1607]: 2025-11-04 23:55:53.090 [INFO][3903] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.1/26] IPv6=[] ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" HandleID="k8s-pod-network.99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" Workload="ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0" Nov 4 23:55:53.130385 containerd[1607]: 2025-11-04 23:55:53.093 [INFO][3891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" Namespace="calico-system" Pod="whisker-7ff6944958-gl9d8" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0", GenerateName:"whisker-7ff6944958-", Namespace:"calico-system", SelfLink:"", UID:"911d355e-45cf-436d-93f0-7eb9940b9506", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7ff6944958", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"", Pod:"whisker-7ff6944958-gl9d8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4bc5c1560a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:53.130385 containerd[1607]: 2025-11-04 23:55:53.094 [INFO][3891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.1/32] ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" Namespace="calico-system" Pod="whisker-7ff6944958-gl9d8" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0" Nov 4 23:55:53.130385 containerd[1607]: 2025-11-04 23:55:53.094 [INFO][3891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4bc5c1560a8 ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" Namespace="calico-system" Pod="whisker-7ff6944958-gl9d8" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0" Nov 4 23:55:53.130385 containerd[1607]: 2025-11-04 23:55:53.108 [INFO][3891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" Namespace="calico-system" Pod="whisker-7ff6944958-gl9d8" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0" Nov 4 23:55:53.130385 containerd[1607]: 2025-11-04 23:55:53.110 [INFO][3891] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" Namespace="calico-system" Pod="whisker-7ff6944958-gl9d8" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0", GenerateName:"whisker-7ff6944958-", Namespace:"calico-system", SelfLink:"", UID:"911d355e-45cf-436d-93f0-7eb9940b9506", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7ff6944958", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74", Pod:"whisker-7ff6944958-gl9d8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4bc5c1560a8", MAC:"c2:a3:06:2e:2b:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:53.130385 containerd[1607]: 2025-11-04 23:55:53.123 [INFO][3891] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" Namespace="calico-system" 
Pod="whisker-7ff6944958-gl9d8" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-whisker--7ff6944958--gl9d8-eth0" Nov 4 23:55:53.166570 kubelet[2775]: E1104 23:55:53.166539 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:53.309178 containerd[1607]: time="2025-11-04T23:55:53.308888108Z" level=info msg="connecting to shim 99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74" address="unix:///run/containerd/s/3ae0dcac3c59a3a72a7a02bc48a70b10594a435a0ea168073b5392da337446db" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:53.317526 containerd[1607]: time="2025-11-04T23:55:53.317486371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d70b05d9336b385e964ea6b92a04e84f59963753645cd78a64d2e44d8476ee64\" id:\"f66a9daaae99a4359c6ab38fca9347e0a9822bd96cf61677775219fac4e7bab8\" pid:3930 exit_status:1 exited_at:{seconds:1762300553 nanos:316989202}" Nov 4 23:55:53.349116 systemd[1]: Started cri-containerd-99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74.scope - libcontainer container 99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74. 
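The IPAM records above walk the standard Calico assignment path for the new whisker pod: acquire the host-wide IPAM lock, confirm the node's affinity for block 192.168.64.0/26, claim 192.168.64.1/26, release the lock. The final "assigned addresses" record is the one worth extracting when auditing allocations; a sketch (the `assigned_ips` helper is illustrative only):

```python
import re

# Excerpt of the ipam_plugin "assigned addresses" record above.
RECORD = ('Calico CNI IPAM assigned addresses IPv4=[192.168.64.1/26] IPv6=[] '
          'ContainerID="99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74"')

def assigned_ips(record: str):
    """Extract the IPv4 and IPv6 CIDR lists the IPAM plugin reported."""
    v4 = re.search(r'IPv4=\[([^\]]*)\]', record)
    v6 = re.search(r'IPv6=\[([^\]]*)\]', record)
    split = lambda m: [c.strip() for c in m.group(1).split(',') if c.strip()]
    return split(v4), split(v6)

print(assigned_ips(RECORD))  # (['192.168.64.1/26'], [])
```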
Nov 4 23:55:53.406118 containerd[1607]: time="2025-11-04T23:55:53.406056672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ff6944958-gl9d8,Uid:911d355e-45cf-436d-93f0-7eb9940b9506,Namespace:calico-system,Attempt:0,} returns sandbox id \"99e4433aec91ccf2562e46ae6d72dbe1112adcaf9510f1d0bcb0edb3f016ce74\"" Nov 4 23:55:53.409359 containerd[1607]: time="2025-11-04T23:55:53.409207848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:55:53.763815 containerd[1607]: time="2025-11-04T23:55:53.763511901Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:53.767236 containerd[1607]: time="2025-11-04T23:55:53.764801123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:55:53.767236 containerd[1607]: time="2025-11-04T23:55:53.765387049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:55:53.767772 kubelet[2775]: E1104 23:55:53.767724 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:55:53.769214 kubelet[2775]: E1104 23:55:53.767786 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:55:53.774111 kubelet[2775]: E1104 23:55:53.774048 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bd3b3f069a2d4d429e6529ab052ead9b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lckqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff6944958-gl9d8_calico-system(911d355e-45cf-436d-93f0-7eb9940b9506): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 
4 23:55:53.776730 containerd[1607]: time="2025-11-04T23:55:53.776478034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:55:54.125646 containerd[1607]: time="2025-11-04T23:55:54.125551652Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:54.126531 containerd[1607]: time="2025-11-04T23:55:54.126435265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:55:54.126531 containerd[1607]: time="2025-11-04T23:55:54.126487013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:55:54.127061 kubelet[2775]: E1104 23:55:54.127004 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:55:54.127143 kubelet[2775]: E1104 23:55:54.127070 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:55:54.127263 kubelet[2775]: E1104 23:55:54.127203 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lckqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff6944958-gl9d8_calico-system(911d355e-45cf-436d-93f0-7eb9940b9506): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:54.128432 kubelet[2775]: E1104 23:55:54.128387 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff6944958-gl9d8" podUID="911d355e-45cf-436d-93f0-7eb9940b9506" Nov 4 23:55:54.171440 kubelet[2775]: E1104 23:55:54.171392 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff6944958-gl9d8" podUID="911d355e-45cf-436d-93f0-7eb9940b9506" Nov 4 23:55:54.307109 systemd-networkd[1483]: cali4bc5c1560a8: Gained IPv6LL Nov 4 23:55:55.177328 kubelet[2775]: E1104 23:55:55.177232 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff6944958-gl9d8" podUID="911d355e-45cf-436d-93f0-7eb9940b9506" Nov 4 23:55:55.509867 kubelet[2775]: I1104 23:55:55.509659 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:55:55.510678 kubelet[2775]: E1104 23:55:55.510128 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:56.177430 kubelet[2775]: E1104 23:55:56.176953 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:56.665630 systemd-networkd[1483]: vxlan.calico: Link UP Nov 4 
23:55:56.665645 systemd-networkd[1483]: vxlan.calico: Gained carrier Nov 4 23:55:56.778937 containerd[1607]: time="2025-11-04T23:55:56.778874708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sppm,Uid:c9154d8d-6fa3-4eb3-9ec8-93848d59c99a,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:56.779759 containerd[1607]: time="2025-11-04T23:55:56.779700520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5bd6bf98-l6dql,Uid:9087d4ae-63b3-470b-8bf4-d4e7bf32985a,Namespace:calico-system,Attempt:0,}" Nov 4 23:55:57.003659 systemd-networkd[1483]: caliec2f83339c4: Link UP Nov 4 23:55:57.004705 systemd-networkd[1483]: caliec2f83339c4: Gained carrier Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.871 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0 csi-node-driver- calico-system c9154d8d-6fa3-4eb3-9ec8-93848d59c99a 734 0 2025-11-04 23:55:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487.0.0-n-936e1cfeba csi-node-driver-8sppm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliec2f83339c4 [] [] }} ContainerID="93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" Namespace="calico-system" Pod="csi-node-driver-8sppm" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-" Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.872 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" Namespace="calico-system" Pod="csi-node-driver-8sppm" 
WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0" Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.931 [INFO][4223] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" HandleID="k8s-pod-network.93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" Workload="ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0" Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.933 [INFO][4223] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" HandleID="k8s-pod-network.93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" Workload="ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-936e1cfeba", "pod":"csi-node-driver-8sppm", "timestamp":"2025-11-04 23:55:56.931793614 +0000 UTC"}, Hostname:"ci-4487.0.0-n-936e1cfeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.933 [INFO][4223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.933 [INFO][4223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.933 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-936e1cfeba'
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.942 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.958 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.966 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.968 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.974 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.974 [INFO][4223] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.976 [INFO][4223] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.980 [INFO][4223] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.989 [INFO][4223] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.2/26] block=192.168.64.0/26 handle="k8s-pod-network.93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.989 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.2/26] handle="k8s-pod-network.93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.990 [INFO][4223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 4 23:55:57.034858 containerd[1607]: 2025-11-04 23:55:56.990 [INFO][4223] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.2/26] IPv6=[] ContainerID="93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" HandleID="k8s-pod-network.93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" Workload="ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0"
Nov 4 23:55:57.035565 containerd[1607]: 2025-11-04 23:55:56.997 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" Namespace="calico-system" Pod="csi-node-driver-8sppm" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9154d8d-6fa3-4eb3-9ec8-93848d59c99a", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"", Pod:"csi-node-driver-8sppm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliec2f83339c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 4 23:55:57.035565 containerd[1607]: 2025-11-04 23:55:56.997 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.2/32] ContainerID="93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" Namespace="calico-system" Pod="csi-node-driver-8sppm" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0"
Nov 4 23:55:57.035565 containerd[1607]: 2025-11-04 23:55:56.997 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec2f83339c4 ContainerID="93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" Namespace="calico-system" Pod="csi-node-driver-8sppm" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0"
Nov 4 23:55:57.035565 containerd[1607]: 2025-11-04 23:55:57.010 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" Namespace="calico-system" Pod="csi-node-driver-8sppm" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0"
Nov 4 23:55:57.035565 containerd[1607]: 2025-11-04 23:55:57.010 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" Namespace="calico-system" Pod="csi-node-driver-8sppm" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9154d8d-6fa3-4eb3-9ec8-93848d59c99a", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61", Pod:"csi-node-driver-8sppm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliec2f83339c4", MAC:"9a:a4:88:34:48:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 4 23:55:57.035565 containerd[1607]: 2025-11-04 23:55:57.029 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" Namespace="calico-system" Pod="csi-node-driver-8sppm" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-csi--node--driver--8sppm-eth0"
Nov 4 23:55:57.065597 containerd[1607]: time="2025-11-04T23:55:57.065541079Z" level=info msg="connecting to shim 93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61" address="unix:///run/containerd/s/781369b08c14591c86944b1a6dff0efad54b36986e2f9ee9e3bc15ff5732ebea" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:55:57.112187 systemd[1]: Started cri-containerd-93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61.scope - libcontainer container 93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61.
Nov 4 23:55:57.139240 systemd-networkd[1483]: cali2b3d189c08b: Link UP
Nov 4 23:55:57.143173 systemd-networkd[1483]: cali2b3d189c08b: Gained carrier
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:56.887 [INFO][4202] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0 calico-kube-controllers-7d5bd6bf98- calico-system 9087d4ae-63b3-470b-8bf4-d4e7bf32985a 851 0 2025-11-04 23:55:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d5bd6bf98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487.0.0-n-936e1cfeba calico-kube-controllers-7d5bd6bf98-l6dql eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2b3d189c08b [] [] }} ContainerID="d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" Namespace="calico-system" Pod="calico-kube-controllers-7d5bd6bf98-l6dql" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:56.887 [INFO][4202] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" Namespace="calico-system" Pod="calico-kube-controllers-7d5bd6bf98-l6dql" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:56.953 [INFO][4228] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" HandleID="k8s-pod-network.d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" Workload="ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:56.955 [INFO][4228] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" HandleID="k8s-pod-network.d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" Workload="ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003be090), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-936e1cfeba", "pod":"calico-kube-controllers-7d5bd6bf98-l6dql", "timestamp":"2025-11-04 23:55:56.953895413 +0000 UTC"}, Hostname:"ci-4487.0.0-n-936e1cfeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:56.955 [INFO][4228] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:56.990 [INFO][4228] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:56.990 [INFO][4228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-936e1cfeba'
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:57.044 [INFO][4228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:57.060 [INFO][4228] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:57.069 [INFO][4228] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:57.073 [INFO][4228] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:57.080 [INFO][4228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:57.081 [INFO][4228] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:57.085 [INFO][4228] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:57.095 [INFO][4228] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:57.122 [INFO][4228] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.3/26] block=192.168.64.0/26 handle="k8s-pod-network.d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:57.122 [INFO][4228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.3/26] handle="k8s-pod-network.d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" host="ci-4487.0.0-n-936e1cfeba"
Nov 4 23:55:57.178868 containerd[1607]: 2025-11-04 23:55:57.122 [INFO][4228] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 4 23:55:57.179791 containerd[1607]: 2025-11-04 23:55:57.122 [INFO][4228] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.3/26] IPv6=[] ContainerID="d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" HandleID="k8s-pod-network.d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" Workload="ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0"
Nov 4 23:55:57.179791 containerd[1607]: 2025-11-04 23:55:57.131 [INFO][4202] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" Namespace="calico-system" Pod="calico-kube-controllers-7d5bd6bf98-l6dql" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0", GenerateName:"calico-kube-controllers-7d5bd6bf98-", Namespace:"calico-system", SelfLink:"", UID:"9087d4ae-63b3-470b-8bf4-d4e7bf32985a", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d5bd6bf98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"", Pod:"calico-kube-controllers-7d5bd6bf98-l6dql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b3d189c08b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 4 23:55:57.179791 containerd[1607]: 2025-11-04 23:55:57.131 [INFO][4202] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.3/32] ContainerID="d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" Namespace="calico-system" Pod="calico-kube-controllers-7d5bd6bf98-l6dql" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0"
Nov 4 23:55:57.179791 containerd[1607]: 2025-11-04 23:55:57.131 [INFO][4202] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b3d189c08b ContainerID="d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" Namespace="calico-system" Pod="calico-kube-controllers-7d5bd6bf98-l6dql" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0"
Nov 4 23:55:57.179791 containerd[1607]: 2025-11-04 23:55:57.143 [INFO][4202] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" Namespace="calico-system" Pod="calico-kube-controllers-7d5bd6bf98-l6dql" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0"
Nov 4 23:55:57.183639 containerd[1607]: 2025-11-04 23:55:57.145 [INFO][4202] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" Namespace="calico-system" Pod="calico-kube-controllers-7d5bd6bf98-l6dql" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0", GenerateName:"calico-kube-controllers-7d5bd6bf98-", Namespace:"calico-system", SelfLink:"", UID:"9087d4ae-63b3-470b-8bf4-d4e7bf32985a", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d5bd6bf98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1", Pod:"calico-kube-controllers-7d5bd6bf98-l6dql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b3d189c08b", MAC:"0e:60:25:05:b3:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 4 23:55:57.183639 containerd[1607]: 2025-11-04 23:55:57.172 [INFO][4202] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" Namespace="calico-system" Pod="calico-kube-controllers-7d5bd6bf98-l6dql" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--kube--controllers--7d5bd6bf98--l6dql-eth0"
Nov 4 23:55:57.222125 containerd[1607]: time="2025-11-04T23:55:57.222058323Z" level=info msg="connecting to shim d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1" address="unix:///run/containerd/s/a98a276299139254bd283bb5888a72b211cb53737f65f341cac77d4c744968c6" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:55:57.256080 systemd[1]: Started cri-containerd-d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1.scope - libcontainer container d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1.
Nov 4 23:55:57.340219 containerd[1607]: time="2025-11-04T23:55:57.340164228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5bd6bf98-l6dql,Uid:9087d4ae-63b3-470b-8bf4-d4e7bf32985a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d00391be28f5f8e84f9beec4cb26337f85afadc06f6b97c6abbd8cfd120c62c1\"" Nov 4 23:55:57.344221 containerd[1607]: time="2025-11-04T23:55:57.344185140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:55:57.399596 containerd[1607]: time="2025-11-04T23:55:57.399552624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sppm,Uid:c9154d8d-6fa3-4eb3-9ec8-93848d59c99a,Namespace:calico-system,Attempt:0,} returns sandbox id \"93acdbcb333f8a93ebb5fe99a8dd6df7c610210c340000c55f1efed38c55fe61\"" Nov 4 23:55:57.659964 containerd[1607]: time="2025-11-04T23:55:57.659881610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:57.664198 containerd[1607]: time="2025-11-04T23:55:57.663873064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:55:57.664198 containerd[1607]: time="2025-11-04T23:55:57.664014570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:55:57.668189 kubelet[2775]: E1104 23:55:57.665096 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:55:57.668189 kubelet[2775]: E1104 23:55:57.665164 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:55:57.668189 kubelet[2775]: E1104 23:55:57.665576 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wsttl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d5bd6bf98-l6dql_calico-system(9087d4ae-63b3-470b-8bf4-d4e7bf32985a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:57.669708 kubelet[2775]: E1104 23:55:57.669592 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" podUID="9087d4ae-63b3-470b-8bf4-d4e7bf32985a" Nov 4 23:55:57.669763 containerd[1607]: time="2025-11-04T23:55:57.669446450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:55:57.976685 containerd[1607]: time="2025-11-04T23:55:57.976549933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:57.977472 containerd[1607]: time="2025-11-04T23:55:57.977427478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:55:57.977566 containerd[1607]: time="2025-11-04T23:55:57.977488455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:55:57.978638 kubelet[2775]: E1104 23:55:57.978171 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:55:57.978638 kubelet[2775]: E1104 23:55:57.978238 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:55:57.981113 kubelet[2775]: E1104 23:55:57.980577 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7shc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8sppm_calico-system(c9154d8d-6fa3-4eb3-9ec8-93848d59c99a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:57.984384 containerd[1607]: time="2025-11-04T23:55:57.984354340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:55:58.206616 kubelet[2775]: E1104 23:55:58.206562 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" podUID="9087d4ae-63b3-470b-8bf4-d4e7bf32985a" Nov 4 23:55:58.285576 containerd[1607]: time="2025-11-04T23:55:58.285082730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:58.286135 containerd[1607]: time="2025-11-04T23:55:58.286078897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:55:58.286245 containerd[1607]: time="2025-11-04T23:55:58.286191543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:55:58.286669 kubelet[2775]: E1104 23:55:58.286418 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:55:58.286669 kubelet[2775]: E1104 23:55:58.286475 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:55:58.286669 kubelet[2775]: E1104 23:55:58.286616 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7shc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminatio
nMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8sppm_calico-system(c9154d8d-6fa3-4eb3-9ec8-93848d59c99a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:58.288214 kubelet[2775]: E1104 23:55:58.288121 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a" Nov 4 23:55:58.659101 systemd-networkd[1483]: vxlan.calico: Gained IPv6LL Nov 4 23:55:58.723067 
systemd-networkd[1483]: caliec2f83339c4: Gained IPv6LL Nov 4 23:55:58.778181 containerd[1607]: time="2025-11-04T23:55:58.778114358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7b7f48c-4gxxx,Uid:2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:55:58.935183 systemd-networkd[1483]: calicba40dbba77: Link UP Nov 4 23:55:58.937591 systemd-networkd[1483]: calicba40dbba77: Gained carrier Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.826 [INFO][4385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0 calico-apiserver-6fb7b7f48c- calico-apiserver 2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6 850 0 2025-11-04 23:55:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fb7b7f48c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-n-936e1cfeba calico-apiserver-6fb7b7f48c-4gxxx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicba40dbba77 [] [] }} ContainerID="dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-4gxxx" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.826 [INFO][4385] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-4gxxx" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.874 [INFO][4396] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" HandleID="k8s-pod-network.dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" Workload="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.874 [INFO][4396] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" HandleID="k8s-pod-network.dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" Workload="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b36e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-n-936e1cfeba", "pod":"calico-apiserver-6fb7b7f48c-4gxxx", "timestamp":"2025-11-04 23:55:58.874081784 +0000 UTC"}, Hostname:"ci-4487.0.0-n-936e1cfeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.874 [INFO][4396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.874 [INFO][4396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.874 [INFO][4396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-936e1cfeba' Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.882 [INFO][4396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.890 [INFO][4396] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.897 [INFO][4396] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.900 [INFO][4396] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.904 [INFO][4396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.904 [INFO][4396] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.906 [INFO][4396] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3 Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.912 [INFO][4396] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.922 [INFO][4396] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.64.4/26] block=192.168.64.0/26 handle="k8s-pod-network.dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.922 [INFO][4396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.4/26] handle="k8s-pod-network.dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:58.970509 containerd[1607]: 2025-11-04 23:55:58.922 [INFO][4396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:55:58.971742 containerd[1607]: 2025-11-04 23:55:58.922 [INFO][4396] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.4/26] IPv6=[] ContainerID="dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" HandleID="k8s-pod-network.dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" Workload="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0" Nov 4 23:55:58.971742 containerd[1607]: 2025-11-04 23:55:58.927 [INFO][4385] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-4gxxx" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0", GenerateName:"calico-apiserver-6fb7b7f48c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"6fb7b7f48c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"", Pod:"calico-apiserver-6fb7b7f48c-4gxxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicba40dbba77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:58.971742 containerd[1607]: 2025-11-04 23:55:58.928 [INFO][4385] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.4/32] ContainerID="dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-4gxxx" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0" Nov 4 23:55:58.971742 containerd[1607]: 2025-11-04 23:55:58.928 [INFO][4385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicba40dbba77 ContainerID="dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-4gxxx" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0" Nov 4 23:55:58.971742 containerd[1607]: 2025-11-04 23:55:58.939 [INFO][4385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-4gxxx" 
WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0" Nov 4 23:55:58.972097 containerd[1607]: 2025-11-04 23:55:58.940 [INFO][4385] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-4gxxx" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0", GenerateName:"calico-apiserver-6fb7b7f48c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb7b7f48c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3", Pod:"calico-apiserver-6fb7b7f48c-4gxxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicba40dbba77", MAC:"f2:75:b5:40:02:64", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:58.972097 containerd[1607]: 2025-11-04 23:55:58.963 [INFO][4385] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-4gxxx" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--4gxxx-eth0" Nov 4 23:55:59.002651 containerd[1607]: time="2025-11-04T23:55:59.002455369Z" level=info msg="connecting to shim dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3" address="unix:///run/containerd/s/2be6002227e761d3e21f4e72ebe1fa29752df9efba2514aaa01d1f725be4420c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:59.043633 systemd-networkd[1483]: cali2b3d189c08b: Gained IPv6LL Nov 4 23:55:59.071690 systemd[1]: Started cri-containerd-dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3.scope - libcontainer container dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3. 
Nov 4 23:55:59.161960 containerd[1607]: time="2025-11-04T23:55:59.161899789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7b7f48c-4gxxx,Uid:2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dab7ec37425e09fe328551d5a8d36d1891b70ef3e1cffffb792013fdeba3b0a3\"" Nov 4 23:55:59.166218 containerd[1607]: time="2025-11-04T23:55:59.166156808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:55:59.216157 kubelet[2775]: E1104 23:55:59.215940 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" podUID="9087d4ae-63b3-470b-8bf4-d4e7bf32985a" Nov 4 23:55:59.217905 kubelet[2775]: E1104 23:55:59.217488 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a" Nov 4 23:55:59.484008 containerd[1607]: time="2025-11-04T23:55:59.483703936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:55:59.485142 containerd[1607]: time="2025-11-04T23:55:59.484984671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:55:59.485142 containerd[1607]: time="2025-11-04T23:55:59.485107838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:55:59.485541 kubelet[2775]: E1104 23:55:59.485478 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:55:59.485652 kubelet[2775]: E1104 23:55:59.485549 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:55:59.486099 kubelet[2775]: E1104 23:55:59.485724 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p75tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb7b7f48c-4gxxx_calico-apiserver(2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:55:59.487726 kubelet[2775]: E1104 23:55:59.487674 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" podUID="2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6" Nov 4 23:55:59.777267 containerd[1607]: time="2025-11-04T23:55:59.777112350Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6fb7b7f48c-cbmng,Uid:3251e39b-c4eb-4874-a146-0948813f5507,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:55:59.945638 systemd-networkd[1483]: cali5b4b78a5698: Link UP Nov 4 23:55:59.947964 systemd-networkd[1483]: cali5b4b78a5698: Gained carrier Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.838 [INFO][4460] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0 calico-apiserver-6fb7b7f48c- calico-apiserver 3251e39b-c4eb-4874-a146-0948813f5507 847 0 2025-11-04 23:55:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fb7b7f48c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-n-936e1cfeba calico-apiserver-6fb7b7f48c-cbmng eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5b4b78a5698 [] [] }} ContainerID="2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-cbmng" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.838 [INFO][4460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-cbmng" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.880 [INFO][4472] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" 
HandleID="k8s-pod-network.2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" Workload="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.880 [INFO][4472] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" HandleID="k8s-pod-network.2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" Workload="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-n-936e1cfeba", "pod":"calico-apiserver-6fb7b7f48c-cbmng", "timestamp":"2025-11-04 23:55:59.880382828 +0000 UTC"}, Hostname:"ci-4487.0.0-n-936e1cfeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.880 [INFO][4472] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.880 [INFO][4472] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.880 [INFO][4472] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-936e1cfeba' Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.893 [INFO][4472] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.901 [INFO][4472] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.908 [INFO][4472] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.911 [INFO][4472] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.914 [INFO][4472] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.914 [INFO][4472] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.917 [INFO][4472] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874 Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.924 [INFO][4472] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.934 [INFO][4472] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.64.5/26] block=192.168.64.0/26 handle="k8s-pod-network.2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.934 [INFO][4472] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.5/26] handle="k8s-pod-network.2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:55:59.971583 containerd[1607]: 2025-11-04 23:55:59.934 [INFO][4472] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:55:59.974246 containerd[1607]: 2025-11-04 23:55:59.934 [INFO][4472] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.5/26] IPv6=[] ContainerID="2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" HandleID="k8s-pod-network.2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" Workload="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0" Nov 4 23:55:59.974246 containerd[1607]: 2025-11-04 23:55:59.938 [INFO][4460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-cbmng" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0", GenerateName:"calico-apiserver-6fb7b7f48c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3251e39b-c4eb-4874-a146-0948813f5507", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"6fb7b7f48c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"", Pod:"calico-apiserver-6fb7b7f48c-cbmng", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b4b78a5698", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:59.974246 containerd[1607]: 2025-11-04 23:55:59.938 [INFO][4460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.5/32] ContainerID="2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-cbmng" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0" Nov 4 23:55:59.974246 containerd[1607]: 2025-11-04 23:55:59.938 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b4b78a5698 ContainerID="2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-cbmng" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0" Nov 4 23:55:59.974246 containerd[1607]: 2025-11-04 23:55:59.946 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-cbmng" 
WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0" Nov 4 23:55:59.974662 containerd[1607]: 2025-11-04 23:55:59.947 [INFO][4460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-cbmng" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0", GenerateName:"calico-apiserver-6fb7b7f48c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3251e39b-c4eb-4874-a146-0948813f5507", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb7b7f48c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874", Pod:"calico-apiserver-6fb7b7f48c-cbmng", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b4b78a5698", MAC:"6e:6c:45:36:32:36", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:55:59.974662 containerd[1607]: 2025-11-04 23:55:59.965 [INFO][4460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" Namespace="calico-apiserver" Pod="calico-apiserver-6fb7b7f48c-cbmng" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-calico--apiserver--6fb7b7f48c--cbmng-eth0" Nov 4 23:56:00.010645 containerd[1607]: time="2025-11-04T23:56:00.010589828Z" level=info msg="connecting to shim 2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874" address="unix:///run/containerd/s/da76641c362f8c346b45b5379cd03d763bca42d7d68f62ca0369d1e5754eed9c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:00.062156 systemd[1]: Started cri-containerd-2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874.scope - libcontainer container 2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874. 
Nov 4 23:56:00.132885 containerd[1607]: time="2025-11-04T23:56:00.132716777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb7b7f48c-cbmng,Uid:3251e39b-c4eb-4874-a146-0948813f5507,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2211f4185e89df079a98212932b21f0db561aabaa9a567c3f86c85cd66f9a874\"" Nov 4 23:56:00.136118 containerd[1607]: time="2025-11-04T23:56:00.136005142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:56:00.223411 kubelet[2775]: E1104 23:56:00.223324 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" podUID="2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6" Nov 4 23:56:00.324642 systemd-networkd[1483]: calicba40dbba77: Gained IPv6LL Nov 4 23:56:00.449439 containerd[1607]: time="2025-11-04T23:56:00.449300777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:00.450703 containerd[1607]: time="2025-11-04T23:56:00.450533183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:56:00.451161 containerd[1607]: time="2025-11-04T23:56:00.450686593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:00.451359 kubelet[2775]: E1104 
23:56:00.451308 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:00.451983 kubelet[2775]: E1104 23:56:00.451370 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:00.451983 kubelet[2775]: E1104 23:56:00.451544 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7gnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb7b7f48c-cbmng_calico-apiserver(3251e39b-c4eb-4874-a146-0948813f5507): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:00.453757 kubelet[2775]: E1104 23:56:00.453664 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" podUID="3251e39b-c4eb-4874-a146-0948813f5507" Nov 4 23:56:00.778751 kubelet[2775]: E1104 23:56:00.778010 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:00.779964 kubelet[2775]: E1104 23:56:00.779262 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:00.781028 containerd[1607]: time="2025-11-04T23:56:00.780987535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gjgtk,Uid:ae96bfe9-1b65-45cb-977e-23d44d98b741,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:00.781848 containerd[1607]: time="2025-11-04T23:56:00.781798289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2gxn6,Uid:c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:00.782287 containerd[1607]: time="2025-11-04T23:56:00.782248623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5kmzn,Uid:f648fe07-b491-4dfe-97d4-96bc7bd0b7c5,Namespace:kube-system,Attempt:0,}" Nov 4 
23:56:01.044752 systemd-networkd[1483]: calib63fcdd1cfb: Link UP Nov 4 23:56:01.047153 systemd-networkd[1483]: calib63fcdd1cfb: Gained carrier Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:00.903 [INFO][4537] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0 coredns-674b8bbfcf- kube-system c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19 846 0 2025-11-04 23:55:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-n-936e1cfeba coredns-674b8bbfcf-2gxn6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib63fcdd1cfb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-2gxn6" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:00.904 [INFO][4537] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-2gxn6" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:00.958 [INFO][4576] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" HandleID="k8s-pod-network.879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" Workload="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:00.959 [INFO][4576] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" HandleID="k8s-pod-network.879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" Workload="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5be0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.0-n-936e1cfeba", "pod":"coredns-674b8bbfcf-2gxn6", "timestamp":"2025-11-04 23:56:00.958750358 +0000 UTC"}, Hostname:"ci-4487.0.0-n-936e1cfeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:00.959 [INFO][4576] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:00.959 [INFO][4576] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:00.959 [INFO][4576] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-936e1cfeba' Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:00.972 [INFO][4576] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:00.984 [INFO][4576] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:00.994 [INFO][4576] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:01.000 [INFO][4576] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:01.005 [INFO][4576] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:01.005 [INFO][4576] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:01.010 [INFO][4576] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8 Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:01.018 [INFO][4576] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:01.027 [INFO][4576] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.64.6/26] block=192.168.64.0/26 handle="k8s-pod-network.879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:01.027 [INFO][4576] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.6/26] handle="k8s-pod-network.879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:01.027 [INFO][4576] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:01.077036 containerd[1607]: 2025-11-04 23:56:01.027 [INFO][4576] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.6/26] IPv6=[] ContainerID="879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" HandleID="k8s-pod-network.879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" Workload="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0" Nov 4 23:56:01.080314 containerd[1607]: 2025-11-04 23:56:01.032 [INFO][4537] cni-plugin/k8s.go 418: Populated endpoint ContainerID="879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-2gxn6" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"", Pod:"coredns-674b8bbfcf-2gxn6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib63fcdd1cfb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:01.080314 containerd[1607]: 2025-11-04 23:56:01.033 [INFO][4537] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.6/32] ContainerID="879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-2gxn6" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0" Nov 4 23:56:01.080314 containerd[1607]: 2025-11-04 23:56:01.033 [INFO][4537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib63fcdd1cfb ContainerID="879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-2gxn6" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0" Nov 4 23:56:01.080314 containerd[1607]: 2025-11-04 23:56:01.048 [INFO][4537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-2gxn6" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0" Nov 4 23:56:01.081269 containerd[1607]: 2025-11-04 23:56:01.051 [INFO][4537] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-2gxn6" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8", Pod:"coredns-674b8bbfcf-2gxn6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib63fcdd1cfb", MAC:"6a:07:02:80:2b:08", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:01.081269 containerd[1607]: 2025-11-04 23:56:01.071 [INFO][4537] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-2gxn6" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--2gxn6-eth0" Nov 4 23:56:01.092478 systemd-networkd[1483]: cali5b4b78a5698: Gained IPv6LL Nov 4 23:56:01.125463 containerd[1607]: time="2025-11-04T23:56:01.125211054Z" level=info msg="connecting to shim 879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8" address="unix:///run/containerd/s/e53b42b1565f86cc60c10113724b75790bfa7539a1e14abd03121c14161b464b" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:01.190526 systemd[1]: Started cri-containerd-879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8.scope - libcontainer container 879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8. 
Nov 4 23:56:01.197625 systemd-networkd[1483]: cali5828ff7e535: Link UP Nov 4 23:56:01.197907 systemd-networkd[1483]: cali5828ff7e535: Gained carrier Nov 4 23:56:01.229051 kubelet[2775]: E1104 23:56:01.228902 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" podUID="3251e39b-c4eb-4874-a146-0948813f5507" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:00.900 [INFO][4538] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0 goldmane-666569f655- calico-system ae96bfe9-1b65-45cb-977e-23d44d98b741 853 0 2025-11-04 23:55:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487.0.0-n-936e1cfeba goldmane-666569f655-gjgtk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5828ff7e535 [] [] }} ContainerID="2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" Namespace="calico-system" Pod="goldmane-666569f655-gjgtk" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:00.900 [INFO][4538] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" Namespace="calico-system" Pod="goldmane-666569f655-gjgtk" 
WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:00.988 [INFO][4578] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" HandleID="k8s-pod-network.2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" Workload="ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:00.988 [INFO][4578] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" HandleID="k8s-pod-network.2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" Workload="ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ace0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-n-936e1cfeba", "pod":"goldmane-666569f655-gjgtk", "timestamp":"2025-11-04 23:56:00.988037866 +0000 UTC"}, Hostname:"ci-4487.0.0-n-936e1cfeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:00.988 [INFO][4578] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.027 [INFO][4578] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.027 [INFO][4578] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-936e1cfeba' Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.074 [INFO][4578] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.089 [INFO][4578] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.118 [INFO][4578] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.128 [INFO][4578] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.136 [INFO][4578] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.136 [INFO][4578] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.146 [INFO][4578] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743 Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.156 [INFO][4578] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.176 [INFO][4578] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.64.7/26] block=192.168.64.0/26 handle="k8s-pod-network.2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.176 [INFO][4578] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.7/26] handle="k8s-pod-network.2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.176 [INFO][4578] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:01.237202 containerd[1607]: 2025-11-04 23:56:01.177 [INFO][4578] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.7/26] IPv6=[] ContainerID="2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" HandleID="k8s-pod-network.2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" Workload="ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0" Nov 4 23:56:01.239766 containerd[1607]: 2025-11-04 23:56:01.186 [INFO][4538] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" Namespace="calico-system" Pod="goldmane-666569f655-gjgtk" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ae96bfe9-1b65-45cb-977e-23d44d98b741", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"", Pod:"goldmane-666569f655-gjgtk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5828ff7e535", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:01.239766 containerd[1607]: 2025-11-04 23:56:01.187 [INFO][4538] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.7/32] ContainerID="2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" Namespace="calico-system" Pod="goldmane-666569f655-gjgtk" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0" Nov 4 23:56:01.239766 containerd[1607]: 2025-11-04 23:56:01.188 [INFO][4538] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5828ff7e535 ContainerID="2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" Namespace="calico-system" Pod="goldmane-666569f655-gjgtk" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0" Nov 4 23:56:01.239766 containerd[1607]: 2025-11-04 23:56:01.197 [INFO][4538] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" Namespace="calico-system" Pod="goldmane-666569f655-gjgtk" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0" Nov 4 23:56:01.239766 containerd[1607]: 2025-11-04 23:56:01.199 [INFO][4538] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" Namespace="calico-system" Pod="goldmane-666569f655-gjgtk" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ae96bfe9-1b65-45cb-977e-23d44d98b741", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743", Pod:"goldmane-666569f655-gjgtk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5828ff7e535", MAC:"ba:b6:4f:86:5f:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:01.239766 containerd[1607]: 2025-11-04 23:56:01.231 [INFO][4538] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" Namespace="calico-system" 
Pod="goldmane-666569f655-gjgtk" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-goldmane--666569f655--gjgtk-eth0" Nov 4 23:56:01.321126 containerd[1607]: time="2025-11-04T23:56:01.321059464Z" level=info msg="connecting to shim 2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743" address="unix:///run/containerd/s/0a3d1dfe2063795e2bba63ef1a9cec23dec10dc109bcabcd06a38e4581cc5951" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:01.349064 systemd-networkd[1483]: cali6c498cefbc8: Link UP Nov 4 23:56:01.351703 systemd-networkd[1483]: cali6c498cefbc8: Gained carrier Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:00.913 [INFO][4545] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0 coredns-674b8bbfcf- kube-system f648fe07-b491-4dfe-97d4-96bc7bd0b7c5 852 0 2025-11-04 23:55:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-n-936e1cfeba coredns-674b8bbfcf-5kmzn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6c498cefbc8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" Namespace="kube-system" Pod="coredns-674b8bbfcf-5kmzn" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:00.914 [INFO][4545] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" Namespace="kube-system" Pod="coredns-674b8bbfcf-5kmzn" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.006 [INFO][4586] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" HandleID="k8s-pod-network.cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" Workload="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.007 [INFO][4586] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" HandleID="k8s-pod-network.cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" Workload="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5d20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.0-n-936e1cfeba", "pod":"coredns-674b8bbfcf-5kmzn", "timestamp":"2025-11-04 23:56:01.006789741 +0000 UTC"}, Hostname:"ci-4487.0.0-n-936e1cfeba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.007 [INFO][4586] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.176 [INFO][4586] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.177 [INFO][4586] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-n-936e1cfeba' Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.207 [INFO][4586] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.229 [INFO][4586] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.261 [INFO][4586] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.273 [INFO][4586] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.283 [INFO][4586] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.283 [INFO][4586] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.287 [INFO][4586] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5 Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.301 [INFO][4586] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.323 [INFO][4586] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.64.8/26] block=192.168.64.0/26 handle="k8s-pod-network.cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.324 [INFO][4586] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.8/26] handle="k8s-pod-network.cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" host="ci-4487.0.0-n-936e1cfeba" Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.324 [INFO][4586] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:01.409296 containerd[1607]: 2025-11-04 23:56:01.324 [INFO][4586] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.8/26] IPv6=[] ContainerID="cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" HandleID="k8s-pod-network.cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" Workload="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0" Nov 4 23:56:01.412271 containerd[1607]: 2025-11-04 23:56:01.331 [INFO][4545] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" Namespace="kube-system" Pod="coredns-674b8bbfcf-5kmzn" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f648fe07-b491-4dfe-97d4-96bc7bd0b7c5", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"", Pod:"coredns-674b8bbfcf-5kmzn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c498cefbc8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:01.412271 containerd[1607]: 2025-11-04 23:56:01.331 [INFO][4545] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.8/32] ContainerID="cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" Namespace="kube-system" Pod="coredns-674b8bbfcf-5kmzn" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0" Nov 4 23:56:01.412271 containerd[1607]: 2025-11-04 23:56:01.331 [INFO][4545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c498cefbc8 ContainerID="cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" Namespace="kube-system" Pod="coredns-674b8bbfcf-5kmzn" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0" Nov 4 23:56:01.412271 containerd[1607]: 2025-11-04 23:56:01.352 [INFO][4545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" Namespace="kube-system" Pod="coredns-674b8bbfcf-5kmzn" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0" Nov 4 23:56:01.413453 containerd[1607]: 2025-11-04 23:56:01.353 [INFO][4545] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" Namespace="kube-system" Pod="coredns-674b8bbfcf-5kmzn" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f648fe07-b491-4dfe-97d4-96bc7bd0b7c5", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-n-936e1cfeba", ContainerID:"cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5", Pod:"coredns-674b8bbfcf-5kmzn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c498cefbc8", MAC:"ae:1a:de:30:37:20", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:01.413453 containerd[1607]: 2025-11-04 23:56:01.390 [INFO][4545] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" Namespace="kube-system" Pod="coredns-674b8bbfcf-5kmzn" WorkloadEndpoint="ci--4487.0.0--n--936e1cfeba-k8s-coredns--674b8bbfcf--5kmzn-eth0" Nov 4 23:56:01.445264 containerd[1607]: time="2025-11-04T23:56:01.443589329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2gxn6,Uid:c8424ca1-9bdd-4f3c-ba8d-b16c31b9ab19,Namespace:kube-system,Attempt:0,} returns sandbox id \"879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8\"" Nov 4 23:56:01.449853 kubelet[2775]: E1104 23:56:01.449222 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:01.459387 systemd[1]: Started cri-containerd-2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743.scope - libcontainer container 2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743. 
Nov 4 23:56:01.519506 containerd[1607]: time="2025-11-04T23:56:01.518553064Z" level=info msg="CreateContainer within sandbox \"879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:56:01.532967 containerd[1607]: time="2025-11-04T23:56:01.532889558Z" level=info msg="connecting to shim cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5" address="unix:///run/containerd/s/3f436e1bea212d5a9890ca65324f2c57efe9d525381e521eda5c19c8b4d98302" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:01.553757 containerd[1607]: time="2025-11-04T23:56:01.553047222Z" level=info msg="Container 2d72cfc84532c18aa574197e6f1699985d5a246a598023fb6b7b9a5cbc011463: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:01.564891 containerd[1607]: time="2025-11-04T23:56:01.564741344Z" level=info msg="CreateContainer within sandbox \"879759f1e9862a9d2a39d90ea92ff37a2c69ecd4797c1123bd5a7958023731f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d72cfc84532c18aa574197e6f1699985d5a246a598023fb6b7b9a5cbc011463\"" Nov 4 23:56:01.566464 containerd[1607]: time="2025-11-04T23:56:01.566417338Z" level=info msg="StartContainer for \"2d72cfc84532c18aa574197e6f1699985d5a246a598023fb6b7b9a5cbc011463\"" Nov 4 23:56:01.568614 containerd[1607]: time="2025-11-04T23:56:01.568564344Z" level=info msg="connecting to shim 2d72cfc84532c18aa574197e6f1699985d5a246a598023fb6b7b9a5cbc011463" address="unix:///run/containerd/s/e53b42b1565f86cc60c10113724b75790bfa7539a1e14abd03121c14161b464b" protocol=ttrpc version=3 Nov 4 23:56:01.601645 systemd[1]: Started cri-containerd-cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5.scope - libcontainer container cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5. 
Nov 4 23:56:01.633307 systemd[1]: Started cri-containerd-2d72cfc84532c18aa574197e6f1699985d5a246a598023fb6b7b9a5cbc011463.scope - libcontainer container 2d72cfc84532c18aa574197e6f1699985d5a246a598023fb6b7b9a5cbc011463. Nov 4 23:56:01.707110 containerd[1607]: time="2025-11-04T23:56:01.706755952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gjgtk,Uid:ae96bfe9-1b65-45cb-977e-23d44d98b741,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a9d34527572efed837bad306d5ef35726e46840d0197894f20e9fdcf5753743\"" Nov 4 23:56:01.717910 containerd[1607]: time="2025-11-04T23:56:01.717303561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:56:01.787459 containerd[1607]: time="2025-11-04T23:56:01.787415538Z" level=info msg="StartContainer for \"2d72cfc84532c18aa574197e6f1699985d5a246a598023fb6b7b9a5cbc011463\" returns successfully" Nov 4 23:56:01.807481 containerd[1607]: time="2025-11-04T23:56:01.807423537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5kmzn,Uid:f648fe07-b491-4dfe-97d4-96bc7bd0b7c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5\"" Nov 4 23:56:01.809494 kubelet[2775]: E1104 23:56:01.809451 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:01.816564 containerd[1607]: time="2025-11-04T23:56:01.815807622Z" level=info msg="CreateContainer within sandbox \"cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:56:01.826623 containerd[1607]: time="2025-11-04T23:56:01.826558037Z" level=info msg="Container 2015a062f09b489e6243cfcf31c8ae008e008fc8babb6306e5ba1c5549bb4b22: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:01.836270 containerd[1607]: 
time="2025-11-04T23:56:01.836208538Z" level=info msg="CreateContainer within sandbox \"cb0b0ddef1f509c4f6e138645be9ee7e01567de63e00de4cbd3684268d8f9fd5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2015a062f09b489e6243cfcf31c8ae008e008fc8babb6306e5ba1c5549bb4b22\"" Nov 4 23:56:01.839446 containerd[1607]: time="2025-11-04T23:56:01.839319504Z" level=info msg="StartContainer for \"2015a062f09b489e6243cfcf31c8ae008e008fc8babb6306e5ba1c5549bb4b22\"" Nov 4 23:56:01.841043 containerd[1607]: time="2025-11-04T23:56:01.840908297Z" level=info msg="connecting to shim 2015a062f09b489e6243cfcf31c8ae008e008fc8babb6306e5ba1c5549bb4b22" address="unix:///run/containerd/s/3f436e1bea212d5a9890ca65324f2c57efe9d525381e521eda5c19c8b4d98302" protocol=ttrpc version=3 Nov 4 23:56:01.881232 systemd[1]: Started cri-containerd-2015a062f09b489e6243cfcf31c8ae008e008fc8babb6306e5ba1c5549bb4b22.scope - libcontainer container 2015a062f09b489e6243cfcf31c8ae008e008fc8babb6306e5ba1c5549bb4b22. Nov 4 23:56:01.950052 containerd[1607]: time="2025-11-04T23:56:01.949944276Z" level=info msg="StartContainer for \"2015a062f09b489e6243cfcf31c8ae008e008fc8babb6306e5ba1c5549bb4b22\" returns successfully" Nov 4 23:56:02.045667 containerd[1607]: time="2025-11-04T23:56:02.045278947Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:02.046994 containerd[1607]: time="2025-11-04T23:56:02.046617971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:56:02.047263 containerd[1607]: time="2025-11-04T23:56:02.047212762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:02.047788 kubelet[2775]: E1104 23:56:02.047627 
2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:56:02.048418 kubelet[2775]: E1104 23:56:02.048372 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:56:02.049099 kubelet[2775]: E1104 23:56:02.048869 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Nam
e:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kgv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gjgtk_calico-system(ae96bfe9-1b65-45cb-977e-23d44d98b741): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:02.050354 kubelet[2775]: E1104 23:56:02.050293 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjgtk" podUID="ae96bfe9-1b65-45cb-977e-23d44d98b741" Nov 4 23:56:02.238688 kubelet[2775]: E1104 23:56:02.237938 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjgtk" podUID="ae96bfe9-1b65-45cb-977e-23d44d98b741" Nov 4 23:56:02.241590 kubelet[2775]: E1104 23:56:02.240603 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:02.247673 kubelet[2775]: E1104 23:56:02.247523 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:02.307104 systemd-networkd[1483]: cali5828ff7e535: Gained IPv6LL Nov 4 23:56:02.336528 kubelet[2775]: I1104 23:56:02.335717 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2gxn6" podStartSLOduration=46.325337945 podStartE2EDuration="46.325337945s" podCreationTimestamp="2025-11-04 23:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 
23:56:02.296506359 +0000 UTC m=+51.753429810" watchObservedRunningTime="2025-11-04 23:56:02.325337945 +0000 UTC m=+51.782261387" Nov 4 23:56:02.336528 kubelet[2775]: I1104 23:56:02.336251 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5kmzn" podStartSLOduration=46.336236564000004 podStartE2EDuration="46.336236564s" podCreationTimestamp="2025-11-04 23:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:02.335970408 +0000 UTC m=+51.792893840" watchObservedRunningTime="2025-11-04 23:56:02.336236564 +0000 UTC m=+51.793160004" Nov 4 23:56:02.500031 systemd-networkd[1483]: cali6c498cefbc8: Gained IPv6LL Nov 4 23:56:02.691221 systemd-networkd[1483]: calib63fcdd1cfb: Gained IPv6LL Nov 4 23:56:03.251860 kubelet[2775]: E1104 23:56:03.251337 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:03.254864 kubelet[2775]: E1104 23:56:03.253047 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:03.256429 kubelet[2775]: E1104 23:56:03.256349 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjgtk" podUID="ae96bfe9-1b65-45cb-977e-23d44d98b741" Nov 4 23:56:04.254365 kubelet[2775]: 
E1104 23:56:04.254002 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:04.254365 kubelet[2775]: E1104 23:56:04.254215 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:05.529233 systemd[1]: Started sshd@7-64.227.96.36:22-139.178.89.65:42250.service - OpenSSH per-connection server daemon (139.178.89.65:42250). Nov 4 23:56:05.673960 sshd[4850]: Accepted publickey for core from 139.178.89.65 port 42250 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:05.676892 sshd-session[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:05.684390 systemd-logind[1574]: New session 8 of user core. Nov 4 23:56:05.691213 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 23:56:06.265158 sshd[4854]: Connection closed by 139.178.89.65 port 42250 Nov 4 23:56:06.265660 sshd-session[4850]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:06.276200 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Nov 4 23:56:06.276636 systemd[1]: sshd@7-64.227.96.36:22-139.178.89.65:42250.service: Deactivated successfully. Nov 4 23:56:06.279943 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 23:56:06.283337 systemd-logind[1574]: Removed session 8. 
Nov 4 23:56:08.780915 containerd[1607]: time="2025-11-04T23:56:08.780352564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:56:09.087183 containerd[1607]: time="2025-11-04T23:56:09.087072988Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:09.095615 containerd[1607]: time="2025-11-04T23:56:09.095537760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:56:09.096169 containerd[1607]: time="2025-11-04T23:56:09.095567027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:56:09.096410 kubelet[2775]: E1104 23:56:09.095819 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:09.096410 kubelet[2775]: E1104 23:56:09.095901 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:09.096410 kubelet[2775]: E1104 23:56:09.096096 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bd3b3f069a2d4d429e6529ab052ead9b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lckqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff6944958-gl9d8_calico-system(911d355e-45cf-436d-93f0-7eb9940b9506): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:09.099589 containerd[1607]: time="2025-11-04T23:56:09.099429991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 
23:56:09.440941 containerd[1607]: time="2025-11-04T23:56:09.440723256Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:09.441543 containerd[1607]: time="2025-11-04T23:56:09.441465988Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:56:09.441725 containerd[1607]: time="2025-11-04T23:56:09.441575382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:09.441966 kubelet[2775]: E1104 23:56:09.441915 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:56:09.442352 kubelet[2775]: E1104 23:56:09.442318 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:56:09.442562 kubelet[2775]: E1104 23:56:09.442501 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lckqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff6944958-gl9d8_calico-system(911d355e-45cf-436d-93f0-7eb9940b9506): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:09.443972 kubelet[2775]: E1104 23:56:09.443901 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff6944958-gl9d8" podUID="911d355e-45cf-436d-93f0-7eb9940b9506" Nov 4 23:56:09.778840 containerd[1607]: time="2025-11-04T23:56:09.778656344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:56:10.129146 containerd[1607]: time="2025-11-04T23:56:10.129087921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:10.129931 containerd[1607]: time="2025-11-04T23:56:10.129872696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:56:10.130139 containerd[1607]: time="2025-11-04T23:56:10.129990679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:56:10.130409 kubelet[2775]: E1104 23:56:10.130351 2775 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:56:10.131162 kubelet[2775]: E1104 23:56:10.130817 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:56:10.131418 kubelet[2775]: E1104 23:56:10.131354 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7shc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lif
ecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8sppm_calico-system(c9154d8d-6fa3-4eb3-9ec8-93848d59c99a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:10.134739 containerd[1607]: time="2025-11-04T23:56:10.134626444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:56:10.470501 containerd[1607]: time="2025-11-04T23:56:10.470340206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:10.471242 containerd[1607]: time="2025-11-04T23:56:10.471184595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:56:10.471436 containerd[1607]: time="2025-11-04T23:56:10.471217968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:56:10.471642 kubelet[2775]: E1104 
23:56:10.471585 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:56:10.471758 kubelet[2775]: E1104 23:56:10.471657 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:56:10.472992 kubelet[2775]: E1104 23:56:10.471862 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7shc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8sppm_calico-system(c9154d8d-6fa3-4eb3-9ec8-93848d59c99a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:10.473258 kubelet[2775]: E1104 23:56:10.473218 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a" Nov 4 23:56:11.287361 systemd[1]: Started sshd@8-64.227.96.36:22-139.178.89.65:44000.service - OpenSSH per-connection server daemon (139.178.89.65:44000). Nov 4 23:56:11.383666 sshd[4878]: Accepted publickey for core from 139.178.89.65 port 44000 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:11.385388 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:11.391457 systemd-logind[1574]: New session 9 of user core. Nov 4 23:56:11.397140 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 23:56:11.583476 sshd[4881]: Connection closed by 139.178.89.65 port 44000 Nov 4 23:56:11.583986 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:11.591000 systemd[1]: sshd@8-64.227.96.36:22-139.178.89.65:44000.service: Deactivated successfully. Nov 4 23:56:11.596940 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 4 23:56:11.599073 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Nov 4 23:56:11.602377 systemd-logind[1574]: Removed session 9. Nov 4 23:56:11.780265 containerd[1607]: time="2025-11-04T23:56:11.779653407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:56:12.111116 containerd[1607]: time="2025-11-04T23:56:12.110899158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:12.111843 containerd[1607]: time="2025-11-04T23:56:12.111792184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:56:12.112326 containerd[1607]: time="2025-11-04T23:56:12.111850857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:12.112757 kubelet[2775]: E1104 23:56:12.112438 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:12.112757 kubelet[2775]: E1104 23:56:12.112504 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:12.113691 kubelet[2775]: E1104 23:56:12.113295 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p75tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb7b7f48c-4gxxx_calico-apiserver(2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:12.114862 kubelet[2775]: E1104 23:56:12.114486 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" podUID="2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6" Nov 4 23:56:12.780619 containerd[1607]: time="2025-11-04T23:56:12.780545837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:56:13.098676 containerd[1607]: 
time="2025-11-04T23:56:13.098626559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:13.099698 containerd[1607]: time="2025-11-04T23:56:13.099601471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:56:13.099698 containerd[1607]: time="2025-11-04T23:56:13.099659553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:13.099936 kubelet[2775]: E1104 23:56:13.099899 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:13.100014 kubelet[2775]: E1104 23:56:13.099952 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:13.100286 kubelet[2775]: E1104 23:56:13.100198 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wsttl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d5bd6bf98-l6dql_calico-system(9087d4ae-63b3-470b-8bf4-d4e7bf32985a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:13.101055 containerd[1607]: time="2025-11-04T23:56:13.101002867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:56:13.101468 kubelet[2775]: E1104 23:56:13.101428 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" podUID="9087d4ae-63b3-470b-8bf4-d4e7bf32985a" Nov 4 23:56:13.452271 containerd[1607]: 
time="2025-11-04T23:56:13.452129674Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:13.453024 containerd[1607]: time="2025-11-04T23:56:13.452983529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:56:13.453134 containerd[1607]: time="2025-11-04T23:56:13.453076488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:13.453278 kubelet[2775]: E1104 23:56:13.453240 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:13.453588 kubelet[2775]: E1104 23:56:13.453295 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:13.453588 kubelet[2775]: E1104 23:56:13.453430 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7gnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb7b7f48c-cbmng_calico-apiserver(3251e39b-c4eb-4874-a146-0948813f5507): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:13.454991 kubelet[2775]: E1104 23:56:13.454932 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" podUID="3251e39b-c4eb-4874-a146-0948813f5507" Nov 4 23:56:16.598721 systemd[1]: Started sshd@9-64.227.96.36:22-139.178.89.65:39838.service - OpenSSH per-connection server daemon (139.178.89.65:39838). Nov 4 23:56:16.683623 sshd[4898]: Accepted publickey for core from 139.178.89.65 port 39838 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:16.685984 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:16.692125 systemd-logind[1574]: New session 10 of user core. Nov 4 23:56:16.702179 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 23:56:16.857440 sshd[4901]: Connection closed by 139.178.89.65 port 39838 Nov 4 23:56:16.858365 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:16.870438 systemd[1]: sshd@9-64.227.96.36:22-139.178.89.65:39838.service: Deactivated successfully. Nov 4 23:56:16.873287 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 23:56:16.875948 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit. Nov 4 23:56:16.879132 systemd[1]: Started sshd@10-64.227.96.36:22-139.178.89.65:39850.service - OpenSSH per-connection server daemon (139.178.89.65:39850). 
Nov 4 23:56:16.882537 systemd-logind[1574]: Removed session 10. Nov 4 23:56:16.947890 sshd[4916]: Accepted publickey for core from 139.178.89.65 port 39850 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:16.950941 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:16.958485 systemd-logind[1574]: New session 11 of user core. Nov 4 23:56:16.965258 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 23:56:17.153938 sshd[4919]: Connection closed by 139.178.89.65 port 39850 Nov 4 23:56:17.155301 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:17.173723 systemd[1]: sshd@10-64.227.96.36:22-139.178.89.65:39850.service: Deactivated successfully. Nov 4 23:56:17.180026 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 23:56:17.184105 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. Nov 4 23:56:17.189815 systemd-logind[1574]: Removed session 11. Nov 4 23:56:17.195467 systemd[1]: Started sshd@11-64.227.96.36:22-139.178.89.65:39866.service - OpenSSH per-connection server daemon (139.178.89.65:39866). Nov 4 23:56:17.278892 sshd[4929]: Accepted publickey for core from 139.178.89.65 port 39866 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:17.280667 sshd-session[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:17.287245 systemd-logind[1574]: New session 12 of user core. Nov 4 23:56:17.294282 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 23:56:17.438249 sshd[4932]: Connection closed by 139.178.89.65 port 39866 Nov 4 23:56:17.439205 sshd-session[4929]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:17.445794 systemd[1]: sshd@11-64.227.96.36:22-139.178.89.65:39866.service: Deactivated successfully. Nov 4 23:56:17.449019 systemd[1]: session-12.scope: Deactivated successfully. 
Nov 4 23:56:17.450464 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit. Nov 4 23:56:17.452325 systemd-logind[1574]: Removed session 12. Nov 4 23:56:18.778562 containerd[1607]: time="2025-11-04T23:56:18.778515348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:56:19.102132 containerd[1607]: time="2025-11-04T23:56:19.102054091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:19.103024 containerd[1607]: time="2025-11-04T23:56:19.102948065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:56:19.103172 containerd[1607]: time="2025-11-04T23:56:19.103053062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:19.103372 kubelet[2775]: E1104 23:56:19.103261 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:56:19.104919 kubelet[2775]: E1104 23:56:19.103395 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:56:19.104919 kubelet[2775]: E1104 23:56:19.103588 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kgv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gjgtk_calico-system(ae96bfe9-1b65-45cb-977e-23d44d98b741): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:19.105383 kubelet[2775]: E1104 23:56:19.105183 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjgtk" podUID="ae96bfe9-1b65-45cb-977e-23d44d98b741" Nov 4 23:56:21.786987 kubelet[2775]: E1104 23:56:21.786888 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a" Nov 4 23:56:22.455494 systemd[1]: Started sshd@12-64.227.96.36:22-139.178.89.65:39870.service - OpenSSH per-connection server daemon (139.178.89.65:39870). Nov 4 23:56:22.528096 sshd[4952]: Accepted publickey for core from 139.178.89.65 port 39870 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:22.529781 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:22.536125 systemd-logind[1574]: New session 13 of user core. Nov 4 23:56:22.542260 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 23:56:22.695864 sshd[4955]: Connection closed by 139.178.89.65 port 39870 Nov 4 23:56:22.696769 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:22.704660 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit. Nov 4 23:56:22.705399 systemd[1]: sshd@12-64.227.96.36:22-139.178.89.65:39870.service: Deactivated successfully. Nov 4 23:56:22.714219 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 23:56:22.719382 systemd-logind[1574]: Removed session 13. 
Nov 4 23:56:22.778561 kubelet[2775]: E1104 23:56:22.778482 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:22.786614 kubelet[2775]: E1104 23:56:22.786497 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff6944958-gl9d8" podUID="911d355e-45cf-436d-93f0-7eb9940b9506" Nov 4 23:56:23.273905 containerd[1607]: time="2025-11-04T23:56:23.273687197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d70b05d9336b385e964ea6b92a04e84f59963753645cd78a64d2e44d8476ee64\" id:\"c6f838ec424adea23e3cdcd53f0942b9562672e55cf29d0605ec701b949a40be\" pid:4984 exit_status:1 exited_at:{seconds:1762300583 nanos:272944210}" Nov 4 23:56:23.777698 kubelet[2775]: E1104 23:56:23.777627 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" podUID="9087d4ae-63b3-470b-8bf4-d4e7bf32985a" Nov 4 23:56:26.779104 kubelet[2775]: E1104 23:56:26.778988 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" podUID="2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6" Nov 4 23:56:27.713199 systemd[1]: Started sshd@13-64.227.96.36:22-139.178.89.65:40292.service - OpenSSH per-connection server daemon (139.178.89.65:40292). 
Nov 4 23:56:27.780591 kubelet[2775]: E1104 23:56:27.780540 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" podUID="3251e39b-c4eb-4874-a146-0948813f5507" Nov 4 23:56:27.847876 sshd[5000]: Accepted publickey for core from 139.178.89.65 port 40292 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:27.849130 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:27.854318 systemd-logind[1574]: New session 14 of user core. Nov 4 23:56:27.859141 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 23:56:28.046001 sshd[5004]: Connection closed by 139.178.89.65 port 40292 Nov 4 23:56:28.048255 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:28.053426 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit. Nov 4 23:56:28.053523 systemd[1]: sshd@13-64.227.96.36:22-139.178.89.65:40292.service: Deactivated successfully. Nov 4 23:56:28.055983 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:56:28.059672 systemd-logind[1574]: Removed session 14. 
Nov 4 23:56:29.777920 kubelet[2775]: E1104 23:56:29.777857 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjgtk" podUID="ae96bfe9-1b65-45cb-977e-23d44d98b741" Nov 4 23:56:33.071290 systemd[1]: Started sshd@14-64.227.96.36:22-139.178.89.65:40300.service - OpenSSH per-connection server daemon (139.178.89.65:40300). Nov 4 23:56:33.164240 sshd[5017]: Accepted publickey for core from 139.178.89.65 port 40300 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:33.167210 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:33.173694 systemd-logind[1574]: New session 15 of user core. Nov 4 23:56:33.181145 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 23:56:33.344047 sshd[5020]: Connection closed by 139.178.89.65 port 40300 Nov 4 23:56:33.345265 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:33.350865 systemd[1]: sshd@14-64.227.96.36:22-139.178.89.65:40300.service: Deactivated successfully. Nov 4 23:56:33.353759 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 23:56:33.355317 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit. Nov 4 23:56:33.357690 systemd-logind[1574]: Removed session 15. 
Nov 4 23:56:33.776976 kubelet[2775]: E1104 23:56:33.776366 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:33.779939 containerd[1607]: time="2025-11-04T23:56:33.779878684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:56:34.093434 containerd[1607]: time="2025-11-04T23:56:34.093380473Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:34.094230 containerd[1607]: time="2025-11-04T23:56:34.094161619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:56:34.094664 containerd[1607]: time="2025-11-04T23:56:34.094272847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:56:34.094732 kubelet[2775]: E1104 23:56:34.094567 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:34.094732 kubelet[2775]: E1104 23:56:34.094630 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:34.095866 kubelet[2775]: E1104 23:56:34.094757 2775 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bd3b3f069a2d4d429e6529ab052ead9b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lckqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff6944958-gl9d8_calico-system(911d355e-45cf-436d-93f0-7eb9940b9506): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:34.097399 containerd[1607]: time="2025-11-04T23:56:34.097367552Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:56:34.426114 containerd[1607]: time="2025-11-04T23:56:34.425506565Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:34.426457 containerd[1607]: time="2025-11-04T23:56:34.426382726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:56:34.426457 containerd[1607]: time="2025-11-04T23:56:34.426427307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:34.426930 kubelet[2775]: E1104 23:56:34.426664 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:56:34.426930 kubelet[2775]: E1104 23:56:34.426716 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:56:34.426930 kubelet[2775]: E1104 23:56:34.426875 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lckqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff6944958-gl9d8_calico-system(911d355e-45cf-436d-93f0-7eb9940b9506): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:34.428075 kubelet[2775]: E1104 23:56:34.428000 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff6944958-gl9d8" podUID="911d355e-45cf-436d-93f0-7eb9940b9506" Nov 4 23:56:34.780456 containerd[1607]: time="2025-11-04T23:56:34.780150743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:56:35.135280 containerd[1607]: time="2025-11-04T23:56:35.135205396Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:35.136164 containerd[1607]: time="2025-11-04T23:56:35.136084512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:56:35.137433 containerd[1607]: time="2025-11-04T23:56:35.136236972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:56:35.137549 kubelet[2775]: E1104 23:56:35.136518 2775 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:56:35.137549 kubelet[2775]: E1104 23:56:35.136582 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:56:35.138920 kubelet[2775]: E1104 23:56:35.136776 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7shc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lif
ecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8sppm_calico-system(c9154d8d-6fa3-4eb3-9ec8-93848d59c99a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:35.143716 containerd[1607]: time="2025-11-04T23:56:35.143662808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:56:35.499754 containerd[1607]: time="2025-11-04T23:56:35.499323772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:35.500298 containerd[1607]: time="2025-11-04T23:56:35.500236189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:56:35.500967 containerd[1607]: time="2025-11-04T23:56:35.500254631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:56:35.501324 kubelet[2775]: E1104 
23:56:35.500524 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:56:35.501324 kubelet[2775]: E1104 23:56:35.500573 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:56:35.501324 kubelet[2775]: E1104 23:56:35.500704 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7shc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8sppm_calico-system(c9154d8d-6fa3-4eb3-9ec8-93848d59c99a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:35.501954 kubelet[2775]: E1104 23:56:35.501921 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a" Nov 4 23:56:35.777972 kubelet[2775]: E1104 23:56:35.776537 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:36.779584 containerd[1607]: time="2025-11-04T23:56:36.779540325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:56:37.105001 containerd[1607]: time="2025-11-04T23:56:37.104942864Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:37.105708 containerd[1607]: time="2025-11-04T23:56:37.105653044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:56:37.105938 containerd[1607]: time="2025-11-04T23:56:37.105749473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:37.106220 kubelet[2775]: E1104 23:56:37.106159 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:37.106665 kubelet[2775]: E1104 23:56:37.106227 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:37.106665 kubelet[2775]: E1104 23:56:37.106404 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wsttl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d5bd6bf98-l6dql_calico-system(9087d4ae-63b3-470b-8bf4-d4e7bf32985a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:37.107704 kubelet[2775]: E1104 23:56:37.107650 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" podUID="9087d4ae-63b3-470b-8bf4-d4e7bf32985a" Nov 4 23:56:37.778170 containerd[1607]: time="2025-11-04T23:56:37.777871494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:56:38.117038 containerd[1607]: 
time="2025-11-04T23:56:38.116979130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:38.117884 containerd[1607]: time="2025-11-04T23:56:38.117813664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:56:38.118004 containerd[1607]: time="2025-11-04T23:56:38.117856975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:38.118195 kubelet[2775]: E1104 23:56:38.118136 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:38.118484 kubelet[2775]: E1104 23:56:38.118206 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:38.118484 kubelet[2775]: E1104 23:56:38.118392 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p75tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb7b7f48c-4gxxx_calico-apiserver(2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:38.120075 kubelet[2775]: E1104 23:56:38.120031 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" podUID="2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6" Nov 4 23:56:38.363689 systemd[1]: Started sshd@15-64.227.96.36:22-139.178.89.65:53684.service - OpenSSH per-connection server daemon (139.178.89.65:53684). Nov 4 23:56:38.479791 sshd[5039]: Accepted publickey for core from 139.178.89.65 port 53684 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:38.482061 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:38.487940 systemd-logind[1574]: New session 16 of user core. Nov 4 23:56:38.493106 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 23:56:38.783196 containerd[1607]: time="2025-11-04T23:56:38.782350176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:56:38.795028 sshd[5042]: Connection closed by 139.178.89.65 port 53684 Nov 4 23:56:38.795448 sshd-session[5039]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:38.814177 systemd[1]: sshd@15-64.227.96.36:22-139.178.89.65:53684.service: Deactivated successfully. Nov 4 23:56:38.817846 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 23:56:38.819927 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit. 
Nov 4 23:56:38.824715 systemd[1]: Started sshd@16-64.227.96.36:22-139.178.89.65:53694.service - OpenSSH per-connection server daemon (139.178.89.65:53694). Nov 4 23:56:38.828131 systemd-logind[1574]: Removed session 16. Nov 4 23:56:38.882764 sshd[5054]: Accepted publickey for core from 139.178.89.65 port 53694 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:38.884988 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:38.892997 systemd-logind[1574]: New session 17 of user core. Nov 4 23:56:38.897101 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 23:56:39.134198 containerd[1607]: time="2025-11-04T23:56:39.133993978Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:39.135232 containerd[1607]: time="2025-11-04T23:56:39.134773652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:56:39.135232 containerd[1607]: time="2025-11-04T23:56:39.134863629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:39.135356 kubelet[2775]: E1104 23:56:39.135081 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:39.135356 kubelet[2775]: E1104 23:56:39.135167 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:39.135777 kubelet[2775]: E1104 23:56:39.135371 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7gnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb7b7f48c-cbmng_calico-apiserver(3251e39b-c4eb-4874-a146-0948813f5507): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:39.137229 kubelet[2775]: E1104 23:56:39.136982 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" podUID="3251e39b-c4eb-4874-a146-0948813f5507" Nov 4 23:56:39.277115 sshd[5057]: Connection closed by 139.178.89.65 port 53694 Nov 4 23:56:39.278246 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:39.293880 systemd[1]: 
sshd@16-64.227.96.36:22-139.178.89.65:53694.service: Deactivated successfully. Nov 4 23:56:39.298723 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 23:56:39.301205 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit. Nov 4 23:56:39.306513 systemd[1]: Started sshd@17-64.227.96.36:22-139.178.89.65:53710.service - OpenSSH per-connection server daemon (139.178.89.65:53710). Nov 4 23:56:39.308007 systemd-logind[1574]: Removed session 17. Nov 4 23:56:39.409370 sshd[5067]: Accepted publickey for core from 139.178.89.65 port 53710 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:39.411195 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:39.416560 systemd-logind[1574]: New session 18 of user core. Nov 4 23:56:39.424094 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 23:56:40.090774 sshd[5070]: Connection closed by 139.178.89.65 port 53710 Nov 4 23:56:40.092652 sshd-session[5067]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:40.102708 systemd[1]: sshd@17-64.227.96.36:22-139.178.89.65:53710.service: Deactivated successfully. Nov 4 23:56:40.109382 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 23:56:40.112737 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit. Nov 4 23:56:40.120378 systemd[1]: Started sshd@18-64.227.96.36:22-139.178.89.65:53714.service - OpenSSH per-connection server daemon (139.178.89.65:53714). Nov 4 23:56:40.123365 systemd-logind[1574]: Removed session 18. Nov 4 23:56:40.226602 sshd[5087]: Accepted publickey for core from 139.178.89.65 port 53714 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:40.229128 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:40.236440 systemd-logind[1574]: New session 19 of user core. 
Nov 4 23:56:40.246123 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 23:56:40.719192 sshd[5090]: Connection closed by 139.178.89.65 port 53714 Nov 4 23:56:40.720066 sshd-session[5087]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:40.737386 systemd[1]: sshd@18-64.227.96.36:22-139.178.89.65:53714.service: Deactivated successfully. Nov 4 23:56:40.740689 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 23:56:40.742820 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit. Nov 4 23:56:40.752450 systemd[1]: Started sshd@19-64.227.96.36:22-139.178.89.65:53724.service - OpenSSH per-connection server daemon (139.178.89.65:53724). Nov 4 23:56:40.754952 systemd-logind[1574]: Removed session 19. Nov 4 23:56:40.856947 sshd[5100]: Accepted publickey for core from 139.178.89.65 port 53724 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:40.858617 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:40.866548 systemd-logind[1574]: New session 20 of user core. Nov 4 23:56:40.876100 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 23:56:41.028750 sshd[5103]: Connection closed by 139.178.89.65 port 53724 Nov 4 23:56:41.029952 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:41.034865 systemd[1]: sshd@19-64.227.96.36:22-139.178.89.65:53724.service: Deactivated successfully. Nov 4 23:56:41.037859 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 23:56:41.041587 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit. Nov 4 23:56:41.042706 systemd-logind[1574]: Removed session 20. 
Nov 4 23:56:42.780503 containerd[1607]: time="2025-11-04T23:56:42.780022463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 4 23:56:43.099018 containerd[1607]: time="2025-11-04T23:56:43.098955171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:56:43.099874 containerd[1607]: time="2025-11-04T23:56:43.099808752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 4 23:56:43.100109 containerd[1607]: time="2025-11-04T23:56:43.099853984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:56:43.100305 kubelet[2775]: E1104 23:56:43.100260 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 23:56:43.101025 kubelet[2775]: E1104 23:56:43.100708 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 23:56:43.101158 kubelet[2775]: E1104 23:56:43.100994 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kgv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gjgtk_calico-system(ae96bfe9-1b65-45cb-977e-23d44d98b741): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:56:43.102355 kubelet[2775]: E1104 23:56:43.102282 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjgtk" podUID="ae96bfe9-1b65-45cb-977e-23d44d98b741"
Nov 4 23:56:45.776426 kubelet[2775]: E1104 23:56:45.776294 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:46.049465 systemd[1]: Started sshd@20-64.227.96.36:22-139.178.89.65:35952.service - OpenSSH per-connection server daemon (139.178.89.65:35952).
Nov 4 23:56:46.160458 sshd[5118]: Accepted publickey for core from 139.178.89.65 port 35952 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk
Nov 4 23:56:46.163548 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:56:46.169457 systemd-logind[1574]: New session 21 of user core.
Nov 4 23:56:46.175112 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 4 23:56:46.391347 sshd[5121]: Connection closed by 139.178.89.65 port 35952
Nov 4 23:56:46.391998 sshd-session[5118]: pam_unix(sshd:session): session closed for user core
Nov 4 23:56:46.397563 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit.
Nov 4 23:56:46.398073 systemd[1]: sshd@20-64.227.96.36:22-139.178.89.65:35952.service: Deactivated successfully.
Nov 4 23:56:46.400158 systemd[1]: session-21.scope: Deactivated successfully.
Nov 4 23:56:46.402053 systemd-logind[1574]: Removed session 21.
Nov 4 23:56:47.778433 kubelet[2775]: E1104 23:56:47.778360 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" podUID="9087d4ae-63b3-470b-8bf4-d4e7bf32985a"
Nov 4 23:56:48.779936 kubelet[2775]: E1104 23:56:48.779213 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" podUID="2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6"
Nov 4 23:56:48.781941 kubelet[2775]: E1104 23:56:48.781884 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff6944958-gl9d8" podUID="911d355e-45cf-436d-93f0-7eb9940b9506"
Nov 4 23:56:49.779518 kubelet[2775]: E1104 23:56:49.778737 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" podUID="3251e39b-c4eb-4874-a146-0948813f5507"
Nov 4 23:56:49.780182 kubelet[2775]: E1104 23:56:49.779770 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a"
Nov 4 23:56:51.408678 systemd[1]: Started sshd@21-64.227.96.36:22-139.178.89.65:35960.service - OpenSSH per-connection server daemon (139.178.89.65:35960).
Nov 4 23:56:51.506392 sshd[5136]: Accepted publickey for core from 139.178.89.65 port 35960 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk
Nov 4 23:56:51.507986 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:56:51.513315 systemd-logind[1574]: New session 22 of user core.
Nov 4 23:56:51.521209 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 4 23:56:51.685224 sshd[5139]: Connection closed by 139.178.89.65 port 35960
Nov 4 23:56:51.685103 sshd-session[5136]: pam_unix(sshd:session): session closed for user core
Nov 4 23:56:51.691604 systemd[1]: sshd@21-64.227.96.36:22-139.178.89.65:35960.service: Deactivated successfully.
Nov 4 23:56:51.691655 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit.
Nov 4 23:56:51.696053 systemd[1]: session-22.scope: Deactivated successfully.
Nov 4 23:56:51.702305 systemd-logind[1574]: Removed session 22.
Nov 4 23:56:53.276734 containerd[1607]: time="2025-11-04T23:56:53.276673427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d70b05d9336b385e964ea6b92a04e84f59963753645cd78a64d2e44d8476ee64\" id:\"8605d63b9a1b34bffe2840568cdf8369cf0cca66b308b8f62d5b8c00b47dc4b6\" pid:5162 exited_at:{seconds:1762300613 nanos:276275601}"
Nov 4 23:56:53.280624 kubelet[2775]: E1104 23:56:53.280588 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:56.698273 systemd[1]: Started sshd@22-64.227.96.36:22-139.178.89.65:38812.service - OpenSSH per-connection server daemon (139.178.89.65:38812).
Nov 4 23:56:56.781938 kubelet[2775]: E1104 23:56:56.781895 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjgtk" podUID="ae96bfe9-1b65-45cb-977e-23d44d98b741"
Nov 4 23:56:56.817060 sshd[5174]: Accepted publickey for core from 139.178.89.65 port 38812 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk
Nov 4 23:56:56.821244 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:56:56.829550 systemd-logind[1574]: New session 23 of user core.
Nov 4 23:56:56.834060 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 4 23:56:57.218902 sshd[5177]: Connection closed by 139.178.89.65 port 38812
Nov 4 23:56:57.217333 sshd-session[5174]: pam_unix(sshd:session): session closed for user core
Nov 4 23:56:57.226421 systemd[1]: sshd@22-64.227.96.36:22-139.178.89.65:38812.service: Deactivated successfully.
Nov 4 23:56:57.233172 systemd[1]: session-23.scope: Deactivated successfully.
Nov 4 23:56:57.238601 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit.
Nov 4 23:56:57.241188 systemd-logind[1574]: Removed session 23.
Nov 4 23:56:59.780643 kubelet[2775]: E1104 23:56:59.780587 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5bd6bf98-l6dql" podUID="9087d4ae-63b3-470b-8bf4-d4e7bf32985a"
Nov 4 23:57:00.779307 kubelet[2775]: E1104 23:57:00.779237 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-4gxxx" podUID="2ae466a7-998d-43d9-8b5d-5ca3ee8d4af6"
Nov 4 23:57:00.779307 kubelet[2775]: E1104 23:57:00.778787 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb7b7f48c-cbmng" podUID="3251e39b-c4eb-4874-a146-0948813f5507"
Nov 4 23:57:01.780182 kubelet[2775]: E1104 23:57:01.779824 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8sppm" podUID="c9154d8d-6fa3-4eb3-9ec8-93848d59c99a"