Oct 27 08:23:50.173353 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 27 06:24:35 -00 2025
Oct 27 08:23:50.173390 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e
Oct 27 08:23:50.173405 kernel: BIOS-provided physical RAM map:
Oct 27 08:23:50.173412 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 27 08:23:50.173419 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 27 08:23:50.173426 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 27 08:23:50.173435 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct 27 08:23:50.173446 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct 27 08:23:50.173454 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 27 08:23:50.173464 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 27 08:23:50.173471 kernel: NX (Execute Disable) protection: active
Oct 27 08:23:50.173479 kernel: APIC: Static calls initialized
Oct 27 08:23:50.173486 kernel: SMBIOS 2.8 present.
Oct 27 08:23:50.173494 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Oct 27 08:23:50.173503 kernel: DMI: Memory slots populated: 1/1
Oct 27 08:23:50.173514 kernel: Hypervisor detected: KVM
Oct 27 08:23:50.173525 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct 27 08:23:50.173534 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 27 08:23:50.173542 kernel: kvm-clock: using sched offset of 3660851465 cycles
Oct 27 08:23:50.173552 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 27 08:23:50.173561 kernel: tsc: Detected 2494.134 MHz processor
Oct 27 08:23:50.173570 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 27 08:23:50.173579 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 27 08:23:50.173591 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct 27 08:23:50.173599 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 27 08:23:50.173608 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 27 08:23:50.173617 kernel: ACPI: Early table checksum verification disabled
Oct 27 08:23:50.173625 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Oct 27 08:23:50.173634 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:23:50.173643 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:23:50.173655 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:23:50.173663 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 27 08:23:50.173672 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:23:50.173681 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:23:50.173689 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:23:50.173698 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:23:50.173706 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Oct 27 08:23:50.173718 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Oct 27 08:23:50.173726 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 27 08:23:50.173735 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Oct 27 08:23:50.173747 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Oct 27 08:23:50.173756 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Oct 27 08:23:50.173768 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Oct 27 08:23:50.173776 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 27 08:23:50.173785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 27 08:23:50.173794 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Oct 27 08:23:50.173803 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Oct 27 08:23:50.173812 kernel: Zone ranges:
Oct 27 08:23:50.173821 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 27 08:23:50.173833 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Oct 27 08:23:50.173841 kernel: Normal empty
Oct 27 08:23:50.173850 kernel: Device empty
Oct 27 08:23:50.173859 kernel: Movable zone start for each node
Oct 27 08:23:50.173868 kernel: Early memory node ranges
Oct 27 08:23:50.173882 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 27 08:23:50.173896 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct 27 08:23:50.173912 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Oct 27 08:23:50.173925 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 27 08:23:50.173938 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 27 08:23:50.173951 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Oct 27 08:23:50.173964 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 27 08:23:50.173983 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 27 08:23:50.173997 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 27 08:23:50.174017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 27 08:23:50.174027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 27 08:23:50.174036 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 27 08:23:50.174048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 27 08:23:50.174057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 27 08:23:50.174066 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 27 08:23:50.174076 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 27 08:23:50.174085 kernel: TSC deadline timer available
Oct 27 08:23:50.174096 kernel: CPU topo: Max. logical packages: 1
Oct 27 08:23:50.174106 kernel: CPU topo: Max. logical dies: 1
Oct 27 08:23:50.175152 kernel: CPU topo: Max. dies per package: 1
Oct 27 08:23:50.175170 kernel: CPU topo: Max. threads per core: 1
Oct 27 08:23:50.175179 kernel: CPU topo: Num. cores per package: 2
Oct 27 08:23:50.175189 kernel: CPU topo: Num. threads per package: 2
Oct 27 08:23:50.175198 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Oct 27 08:23:50.175214 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 27 08:23:50.175223 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 27 08:23:50.175232 kernel: Booting paravirtualized kernel on KVM
Oct 27 08:23:50.175242 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 27 08:23:50.175251 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 27 08:23:50.175260 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Oct 27 08:23:50.175269 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Oct 27 08:23:50.175281 kernel: pcpu-alloc: [0] 0 1
Oct 27 08:23:50.175290 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 27 08:23:50.175301 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e
Oct 27 08:23:50.175311 kernel: random: crng init done
Oct 27 08:23:50.175320 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 27 08:23:50.175329 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 27 08:23:50.175338 kernel: Fallback order for Node 0: 0
Oct 27 08:23:50.175350 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Oct 27 08:23:50.175359 kernel: Policy zone: DMA32
Oct 27 08:23:50.175368 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 27 08:23:50.175377 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 27 08:23:50.175386 kernel: Kernel/User page tables isolation: enabled
Oct 27 08:23:50.175396 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 27 08:23:50.175405 kernel: ftrace: allocated 157 pages with 5 groups
Oct 27 08:23:50.175416 kernel: Dynamic Preempt: voluntary
Oct 27 08:23:50.175426 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 27 08:23:50.175436 kernel: rcu: RCU event tracing is enabled.
Oct 27 08:23:50.175445 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 27 08:23:50.175454 kernel: Trampoline variant of Tasks RCU enabled.
Oct 27 08:23:50.175463 kernel: Rude variant of Tasks RCU enabled.
Oct 27 08:23:50.175472 kernel: Tracing variant of Tasks RCU enabled.
Oct 27 08:23:50.175481 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 27 08:23:50.175493 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 27 08:23:50.175502 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 27 08:23:50.175516 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 27 08:23:50.175525 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 27 08:23:50.175534 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 27 08:23:50.175543 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 27 08:23:50.175552 kernel: Console: colour VGA+ 80x25
Oct 27 08:23:50.175564 kernel: printk: legacy console [tty0] enabled
Oct 27 08:23:50.175573 kernel: printk: legacy console [ttyS0] enabled
Oct 27 08:23:50.175583 kernel: ACPI: Core revision 20240827
Oct 27 08:23:50.175592 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 27 08:23:50.175610 kernel: APIC: Switch to symmetric I/O mode setup
Oct 27 08:23:50.175622 kernel: x2apic enabled
Oct 27 08:23:50.175631 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 27 08:23:50.175641 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 27 08:23:50.175650 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Oct 27 08:23:50.175665 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
Oct 27 08:23:50.175675 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 27 08:23:50.175685 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 27 08:23:50.175694 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 27 08:23:50.175706 kernel: Spectre V2 : Mitigation: Retpolines
Oct 27 08:23:50.175716 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 27 08:23:50.175726 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 27 08:23:50.175735 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 27 08:23:50.175745 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 27 08:23:50.175754 kernel: MDS: Mitigation: Clear CPU buffers
Oct 27 08:23:50.175764 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 27 08:23:50.175776 kernel: active return thunk: its_return_thunk
Oct 27 08:23:50.175785 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 27 08:23:50.175795 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 27 08:23:50.175805 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 27 08:23:50.175814 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 27 08:23:50.175824 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 27 08:23:50.175833 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 27 08:23:50.175845 kernel: Freeing SMP alternatives memory: 32K
Oct 27 08:23:50.175855 kernel: pid_max: default: 32768 minimum: 301
Oct 27 08:23:50.175865 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 27 08:23:50.175874 kernel: landlock: Up and running.
Oct 27 08:23:50.175884 kernel: SELinux: Initializing.
Oct 27 08:23:50.175893 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 27 08:23:50.175903 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 27 08:23:50.175915 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Oct 27 08:23:50.175925 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Oct 27 08:23:50.175935 kernel: signal: max sigframe size: 1776
Oct 27 08:23:50.175944 kernel: rcu: Hierarchical SRCU implementation.
Oct 27 08:23:50.175954 kernel: rcu: Max phase no-delay instances is 400.
Oct 27 08:23:50.175963 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 27 08:23:50.175973 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 27 08:23:50.175985 kernel: smp: Bringing up secondary CPUs ...
Oct 27 08:23:50.175998 kernel: smpboot: x86: Booting SMP configuration:
Oct 27 08:23:50.176008 kernel: .... node #0, CPUs: #1
Oct 27 08:23:50.176018 kernel: smp: Brought up 1 node, 2 CPUs
Oct 27 08:23:50.176027 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
Oct 27 08:23:50.176037 kernel: Memory: 1989436K/2096612K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 102612K reserved, 0K cma-reserved)
Oct 27 08:23:50.176047 kernel: devtmpfs: initialized
Oct 27 08:23:50.176059 kernel: x86/mm: Memory block size: 128MB
Oct 27 08:23:50.176069 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 27 08:23:50.176078 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 27 08:23:50.176088 kernel: pinctrl core: initialized pinctrl subsystem
Oct 27 08:23:50.176097 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 27 08:23:50.176107 kernel: audit: initializing netlink subsys (disabled)
Oct 27 08:23:50.177150 kernel: audit: type=2000 audit(1761553427.787:1): state=initialized audit_enabled=0 res=1
Oct 27 08:23:50.177170 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 27 08:23:50.177180 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 27 08:23:50.177191 kernel: cpuidle: using governor menu
Oct 27 08:23:50.177201 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 27 08:23:50.177211 kernel: dca service started, version 1.12.1
Oct 27 08:23:50.177221 kernel: PCI: Using configuration type 1 for base access
Oct 27 08:23:50.177231 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 27 08:23:50.177243 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 27 08:23:50.177255 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 27 08:23:50.177269 kernel: ACPI: Added _OSI(Module Device)
Oct 27 08:23:50.177279 kernel: ACPI: Added _OSI(Processor Device)
Oct 27 08:23:50.177288 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 27 08:23:50.177298 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 27 08:23:50.177308 kernel: ACPI: Interpreter enabled
Oct 27 08:23:50.177318 kernel: ACPI: PM: (supports S0 S5)
Oct 27 08:23:50.177330 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 27 08:23:50.177340 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 27 08:23:50.177350 kernel: PCI: Using E820 reservations for host bridge windows
Oct 27 08:23:50.177364 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 27 08:23:50.177374 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 27 08:23:50.177664 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 27 08:23:50.177810 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 27 08:23:50.177966 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 27 08:23:50.177980 kernel: acpiphp: Slot [3] registered
Oct 27 08:23:50.177990 kernel: acpiphp: Slot [4] registered
Oct 27 08:23:50.178000 kernel: acpiphp: Slot [5] registered
Oct 27 08:23:50.178009 kernel: acpiphp: Slot [6] registered
Oct 27 08:23:50.178024 kernel: acpiphp: Slot [7] registered
Oct 27 08:23:50.178034 kernel: acpiphp: Slot [8] registered
Oct 27 08:23:50.178048 kernel: acpiphp: Slot [9] registered
Oct 27 08:23:50.178060 kernel: acpiphp: Slot [10] registered
Oct 27 08:23:50.178070 kernel: acpiphp: Slot [11] registered
Oct 27 08:23:50.178080 kernel: acpiphp: Slot [12] registered
Oct 27 08:23:50.178089 kernel: acpiphp: Slot [13] registered
Oct 27 08:23:50.178099 kernel: acpiphp: Slot [14] registered
Oct 27 08:23:50.178111 kernel: acpiphp: Slot [15] registered
Oct 27 08:23:50.178752 kernel: acpiphp: Slot [16] registered
Oct 27 08:23:50.178762 kernel: acpiphp: Slot [17] registered
Oct 27 08:23:50.178772 kernel: acpiphp: Slot [18] registered
Oct 27 08:23:50.178782 kernel: acpiphp: Slot [19] registered
Oct 27 08:23:50.178791 kernel: acpiphp: Slot [20] registered
Oct 27 08:23:50.178801 kernel: acpiphp: Slot [21] registered
Oct 27 08:23:50.178817 kernel: acpiphp: Slot [22] registered
Oct 27 08:23:50.178827 kernel: acpiphp: Slot [23] registered
Oct 27 08:23:50.178837 kernel: acpiphp: Slot [24] registered
Oct 27 08:23:50.178846 kernel: acpiphp: Slot [25] registered
Oct 27 08:23:50.178856 kernel: acpiphp: Slot [26] registered
Oct 27 08:23:50.178865 kernel: acpiphp: Slot [27] registered
Oct 27 08:23:50.178875 kernel: acpiphp: Slot [28] registered
Oct 27 08:23:50.178884 kernel: acpiphp: Slot [29] registered
Oct 27 08:23:50.178896 kernel: acpiphp: Slot [30] registered
Oct 27 08:23:50.178906 kernel: acpiphp: Slot [31] registered
Oct 27 08:23:50.178916 kernel: PCI host bridge to bus 0000:00
Oct 27 08:23:50.180238 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 27 08:23:50.180954 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 27 08:23:50.181125 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 27 08:23:50.181276 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 27 08:23:50.181401 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 27 08:23:50.181536 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 27 08:23:50.181709 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 27 08:23:50.181856 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 27 08:23:50.182019 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 27 08:23:50.182230 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Oct 27 08:23:50.182407 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Oct 27 08:23:50.182542 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Oct 27 08:23:50.182674 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Oct 27 08:23:50.182806 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Oct 27 08:23:50.182960 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 27 08:23:50.183096 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Oct 27 08:23:50.185388 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 27 08:23:50.185566 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 27 08:23:50.185701 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 27 08:23:50.185854 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 27 08:23:50.185987 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 27 08:23:50.186935 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 27 08:23:50.187155 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Oct 27 08:23:50.187295 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Oct 27 08:23:50.187429 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 27 08:23:50.187579 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 27 08:23:50.187710 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Oct 27 08:23:50.187840 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Oct 27 08:23:50.187971 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 27 08:23:50.188111 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 27 08:23:50.190210 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Oct 27 08:23:50.190356 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Oct 27 08:23:50.190490 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 27 08:23:50.190638 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Oct 27 08:23:50.190772 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Oct 27 08:23:50.190903 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Oct 27 08:23:50.191040 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 27 08:23:50.192294 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 27 08:23:50.192469 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Oct 27 08:23:50.192626 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Oct 27 08:23:50.192788 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 27 08:23:50.192947 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 27 08:23:50.193113 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Oct 27 08:23:50.194169 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Oct 27 08:23:50.195540 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 27 08:23:50.195731 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 27 08:23:50.195873 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Oct 27 08:23:50.196032 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Oct 27 08:23:50.196046 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 27 08:23:50.196057 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 27 08:23:50.196067 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 27 08:23:50.196084 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 27 08:23:50.196100 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 27 08:23:50.196149 kernel: iommu: Default domain type: Translated
Oct 27 08:23:50.196161 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 27 08:23:50.196171 kernel: PCI: Using ACPI for IRQ routing
Oct 27 08:23:50.196181 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 27 08:23:50.196191 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 27 08:23:50.196201 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Oct 27 08:23:50.196377 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 27 08:23:50.196560 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 27 08:23:50.196766 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 27 08:23:50.196784 kernel: vgaarb: loaded
Oct 27 08:23:50.196795 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 27 08:23:50.196805 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 27 08:23:50.196815 kernel: clocksource: Switched to clocksource kvm-clock
Oct 27 08:23:50.196830 kernel: VFS: Disk quotas dquot_6.6.0
Oct 27 08:23:50.196846 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 27 08:23:50.196869 kernel: pnp: PnP ACPI init
Oct 27 08:23:50.196885 kernel: pnp: PnP ACPI: found 4 devices
Oct 27 08:23:50.196902 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 27 08:23:50.196917 kernel: NET: Registered PF_INET protocol family
Oct 27 08:23:50.196933 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 27 08:23:50.196944 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 27 08:23:50.196955 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 27 08:23:50.196974 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 27 08:23:50.196989 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 27 08:23:50.197005 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 27 08:23:50.197022 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 27 08:23:50.197039 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 27 08:23:50.197055 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 27 08:23:50.197070 kernel: NET: Registered PF_XDP protocol family
Oct 27 08:23:50.197287 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 27 08:23:50.197476 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 27 08:23:50.197642 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 27 08:23:50.197808 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 27 08:23:50.197978 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 27 08:23:50.198218 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 27 08:23:50.198446 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 27 08:23:50.198470 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 27 08:23:50.198667 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 26782 usecs
Oct 27 08:23:50.198689 kernel: PCI: CLS 0 bytes, default 64
Oct 27 08:23:50.198701 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 27 08:23:50.198712 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Oct 27 08:23:50.198721 kernel: Initialise system trusted keyrings
Oct 27 08:23:50.198738 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 27 08:23:50.198752 kernel: Key type asymmetric registered
Oct 27 08:23:50.198768 kernel: Asymmetric key parser 'x509' registered
Oct 27 08:23:50.198785 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 27 08:23:50.198797 kernel: io scheduler mq-deadline registered
Oct 27 08:23:50.198806 kernel: io scheduler kyber registered
Oct 27 08:23:50.198816 kernel: io scheduler bfq registered
Oct 27 08:23:50.198829 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 27 08:23:50.198839 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 27 08:23:50.198849 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 27 08:23:50.198859 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 27 08:23:50.198868 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 27 08:23:50.198879 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 27 08:23:50.198889 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 27 08:23:50.198901 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 27 08:23:50.198911 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 27 08:23:50.200153 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 27 08:23:50.200180 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 27 08:23:50.200409 kernel: rtc_cmos 00:03: registered as rtc0
Oct 27 08:23:50.200580 kernel: rtc_cmos 00:03: setting system clock to 2025-10-27T08:23:48 UTC (1761553428)
Oct 27 08:23:50.200800 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 27 08:23:50.200824 kernel: intel_pstate: CPU model not supported
Oct 27 08:23:50.200841 kernel: NET: Registered PF_INET6 protocol family
Oct 27 08:23:50.200858 kernel: Segment Routing with IPv6
Oct 27 08:23:50.200873 kernel: In-situ OAM (IOAM) with IPv6
Oct 27 08:23:50.200886 kernel: NET: Registered PF_PACKET protocol family
Oct 27 08:23:50.200896 kernel: Key type dns_resolver registered
Oct 27 08:23:50.200911 kernel: IPI shorthand broadcast: enabled
Oct 27 08:23:50.200927 kernel: sched_clock: Marking stable (1263003005, 144862142)->(1430128904, -22263757)
Oct 27 08:23:50.200938 kernel: registered taskstats version 1
Oct 27 08:23:50.200954 kernel: Loading compiled-in X.509 certificates
Oct 27 08:23:50.200969 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 6c7ef547b8d769f7afd2708799fb9c3145695bfb'
Oct 27 08:23:50.200984 kernel: Demotion targets for Node 0: null
Oct 27 08:23:50.201776 kernel: Key type .fscrypt registered
Oct 27 08:23:50.201800 kernel: Key type fscrypt-provisioning registered
Oct 27 08:23:50.204734 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 27 08:23:50.204768 kernel: ima: Allocated hash algorithm: sha1
Oct 27 08:23:50.204788 kernel: ima: No architecture policies found
Oct 27 08:23:50.204804 kernel: clk: Disabling unused clocks
Oct 27 08:23:50.204822 kernel: Freeing unused kernel image (initmem) memory: 15964K
Oct 27 08:23:50.204841 kernel: Write protecting the kernel read-only data: 40960k
Oct 27 08:23:50.204860 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Oct 27 08:23:50.204881 kernel: Run /init as init process
Oct 27 08:23:50.204898 kernel: with arguments:
Oct 27 08:23:50.204916 kernel: /init
Oct 27 08:23:50.204933 kernel: with environment:
Oct 27 08:23:50.204950 kernel: HOME=/
Oct 27 08:23:50.204967 kernel: TERM=linux
Oct 27 08:23:50.204985 kernel: SCSI subsystem initialized
Oct 27 08:23:50.205005 kernel: libata version 3.00 loaded.
Oct 27 08:23:50.205368 kernel: ata_piix 0000:00:01.1: version 2.13
Oct 27 08:23:50.205633 kernel: scsi host0: ata_piix
Oct 27 08:23:50.205859 kernel: scsi host1: ata_piix
Oct 27 08:23:50.205884 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Oct 27 08:23:50.205909 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Oct 27 08:23:50.205928 kernel: ACPI: bus type USB registered
Oct 27 08:23:50.205946 kernel: usbcore: registered new interface driver usbfs
Oct 27 08:23:50.205964 kernel: usbcore: registered new interface driver hub
Oct 27 08:23:50.205982 kernel: usbcore: registered new device driver usb
Oct 27 08:23:50.206614 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 27 08:23:50.206846 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 27 08:23:50.207064 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 27 08:23:50.207301 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Oct 27 08:23:50.207559 kernel: hub 1-0:1.0: USB hub found
Oct 27 08:23:50.207809 kernel: hub 1-0:1.0: 2 ports detected
Oct 27 08:23:50.208070 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Oct 27 08:23:50.208856 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Oct 27 08:23:50.208888 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 27 08:23:50.208906 kernel: GPT:16515071 != 125829119
Oct 27 08:23:50.208924 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 27 08:23:50.208942 kernel: GPT:16515071 != 125829119
Oct 27 08:23:50.208966 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 27 08:23:50.208984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 08:23:50.209228 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Oct 27 08:23:50.209426 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Oct 27 08:23:50.209618 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Oct 27 08:23:50.209780 kernel: scsi host2: Virtio SCSI HBA Oct 27 08:23:50.209801 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:23:50.209813 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 27 08:23:50.209823 kernel: device-mapper: uevent: version 1.0.3 Oct 27 08:23:50.209834 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 27 08:23:50.212178 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 27 08:23:50.212192 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:23:50.212208 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:23:50.212218 kernel: raid6: avx2x4 gen() 16973 MB/s Oct 27 08:23:50.212229 kernel: raid6: avx2x2 gen() 17677 MB/s Oct 27 08:23:50.212240 kernel: raid6: avx2x1 gen() 13362 MB/s Oct 27 08:23:50.212253 kernel: raid6: using algorithm avx2x2 gen() 17677 MB/s Oct 27 08:23:50.212263 kernel: raid6: .... 
xor() 21052 MB/s, rmw enabled Oct 27 08:23:50.212273 kernel: raid6: using avx2x2 recovery algorithm Oct 27 08:23:50.212284 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:23:50.212297 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:23:50.212306 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:23:50.212317 kernel: xor: automatically using best checksumming function avx Oct 27 08:23:50.212327 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:23:50.212337 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 27 08:23:50.212348 kernel: BTRFS: device fsid bf514789-bcec-4c15-ac9d-e4c3d19a42b2 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (157) Oct 27 08:23:50.212359 kernel: BTRFS info (device dm-0): first mount of filesystem bf514789-bcec-4c15-ac9d-e4c3d19a42b2 Oct 27 08:23:50.212372 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:23:50.212383 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 27 08:23:50.212393 kernel: BTRFS info (device dm-0): enabling free space tree Oct 27 08:23:50.212404 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:23:50.212414 kernel: loop: module loaded Oct 27 08:23:50.212424 kernel: loop0: detected capacity change from 0 to 100120 Oct 27 08:23:50.212434 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 27 08:23:50.212447 systemd[1]: Successfully made /usr/ read-only. Oct 27 08:23:50.212464 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 08:23:50.212476 systemd[1]: Detected virtualization kvm. Oct 27 08:23:50.212486 systemd[1]: Detected architecture x86-64. 
Oct 27 08:23:50.212497 systemd[1]: Running in initrd.
Oct 27 08:23:50.212507 systemd[1]: No hostname configured, using default hostname.
Oct 27 08:23:50.212520 systemd[1]: Hostname set to .
Oct 27 08:23:50.212531 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 27 08:23:50.212541 systemd[1]: Queued start job for default target initrd.target.
Oct 27 08:23:50.212552 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 27 08:23:50.212563 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 27 08:23:50.212574 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 27 08:23:50.212585 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 27 08:23:50.212599 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 27 08:23:50.212610 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 27 08:23:50.212622 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 27 08:23:50.212632 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 27 08:23:50.212643 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 27 08:23:50.212656 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 27 08:23:50.212667 systemd[1]: Reached target paths.target - Path Units.
Oct 27 08:23:50.212678 systemd[1]: Reached target slices.target - Slice Units.
Oct 27 08:23:50.212688 systemd[1]: Reached target swap.target - Swaps.
Oct 27 08:23:50.212699 systemd[1]: Reached target timers.target - Timer Units.
Oct 27 08:23:50.212709 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 27 08:23:50.212720 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 27 08:23:50.212733 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 27 08:23:50.212744 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 27 08:23:50.212754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 08:23:50.212765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 27 08:23:50.212775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 08:23:50.212786 systemd[1]: Reached target sockets.target - Socket Units.
Oct 27 08:23:50.212797 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 27 08:23:50.212810 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 27 08:23:50.212821 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 27 08:23:50.212831 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 27 08:23:50.212843 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 27 08:23:50.212854 systemd[1]: Starting systemd-fsck-usr.service...
Oct 27 08:23:50.212864 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 27 08:23:50.212878 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 27 08:23:50.212888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 08:23:50.212899 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 27 08:23:50.212913 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 08:23:50.212927 systemd[1]: Finished systemd-fsck-usr.service.
Oct 27 08:23:50.212986 systemd-journald[292]: Collecting audit messages is disabled.
Oct 27 08:23:50.213015 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 27 08:23:50.213028 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 27 08:23:50.213039 kernel: Bridge firewalling registered
Oct 27 08:23:50.213050 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 27 08:23:50.213061 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 27 08:23:50.213072 systemd-journald[292]: Journal started
Oct 27 08:23:50.213094 systemd-journald[292]: Runtime Journal (/run/log/journal/e50e1047b37e4aeb85652d2768dc5a65) is 4.9M, max 39.2M, 34.3M free.
Oct 27 08:23:50.185509 systemd-modules-load[294]: Inserted module 'br_netfilter'
Oct 27 08:23:50.217524 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 27 08:23:50.226369 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 27 08:23:50.277521 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 27 08:23:50.281513 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:23:50.282780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 27 08:23:50.286028 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 27 08:23:50.291421 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 27 08:23:50.294538 systemd-tmpfiles[310]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 27 08:23:50.296377 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 27 08:23:50.311149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 27 08:23:50.318973 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 27 08:23:50.323860 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 27 08:23:50.327162 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 27 08:23:50.360829 dracut-cmdline[332]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e
Oct 27 08:23:50.375033 systemd-resolved[317]: Positive Trust Anchors:
Oct 27 08:23:50.375050 systemd-resolved[317]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 27 08:23:50.375054 systemd-resolved[317]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 27 08:23:50.375092 systemd-resolved[317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 27 08:23:50.412526 systemd-resolved[317]: Defaulting to hostname 'linux'.
Oct 27 08:23:50.414454 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 27 08:23:50.415711 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 27 08:23:50.487162 kernel: Loading iSCSI transport class v2.0-870.
Oct 27 08:23:50.504180 kernel: iscsi: registered transport (tcp)
Oct 27 08:23:50.532582 kernel: iscsi: registered transport (qla4xxx)
Oct 27 08:23:50.532685 kernel: QLogic iSCSI HBA Driver
Oct 27 08:23:50.566201 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 27 08:23:50.590471 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 08:23:50.591794 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 27 08:23:50.651762 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 27 08:23:50.653965 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 27 08:23:50.655568 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 27 08:23:50.696193 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 27 08:23:50.699280 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 27 08:23:50.731022 systemd-udevd[573]: Using default interface naming scheme 'v257'.
Oct 27 08:23:50.742818 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 27 08:23:50.748659 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 27 08:23:50.789584 dracut-pre-trigger[637]: rd.md=0: removing MD RAID activation
Oct 27 08:23:50.792751 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 27 08:23:50.796420 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 27 08:23:50.838433 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 27 08:23:50.842246 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 27 08:23:50.856296 systemd-networkd[684]: lo: Link UP
Oct 27 08:23:50.856305 systemd-networkd[684]: lo: Gained carrier
Oct 27 08:23:50.857604 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 27 08:23:50.858283 systemd[1]: Reached target network.target - Network.
Oct 27 08:23:50.921770 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 27 08:23:50.923518 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 27 08:23:51.013016 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 27 08:23:51.050296 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 27 08:23:51.059075 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 27 08:23:51.069110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 27 08:23:51.071250 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 27 08:23:51.101935 disk-uuid[739]: Primary Header is updated.
Oct 27 08:23:51.101935 disk-uuid[739]: Secondary Entries is updated.
Oct 27 08:23:51.101935 disk-uuid[739]: Secondary Header is updated.
Oct 27 08:23:51.111156 kernel: cryptd: max_cpu_qlen set to 1000
Oct 27 08:23:51.175508 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 27 08:23:51.175673 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:23:51.189929 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 08:23:51.201608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 08:23:51.202510 systemd-networkd[684]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Oct 27 08:23:51.202515 systemd-networkd[684]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Oct 27 08:23:51.213939 kernel: AES CTR mode by8 optimization enabled
Oct 27 08:23:51.204793 systemd-networkd[684]: eth0: Link UP
Oct 27 08:23:51.207303 systemd-networkd[684]: eth0: Gained carrier
Oct 27 08:23:51.207324 systemd-networkd[684]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Oct 27 08:23:51.224281 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 27 08:23:51.225188 systemd-networkd[684]: eth0: DHCPv4 address 143.198.224.48/20, gateway 143.198.224.1 acquired from 169.254.169.253
Oct 27 08:23:51.281261 systemd-networkd[684]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 27 08:23:51.281272 systemd-networkd[684]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 27 08:23:51.285575 systemd-networkd[684]: eth1: Link UP
Oct 27 08:23:51.288288 systemd-networkd[684]: eth1: Gained carrier
Oct 27 08:23:51.288311 systemd-networkd[684]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 27 08:23:51.303255 systemd-networkd[684]: eth1: DHCPv4 address 10.124.0.22/20 acquired from 169.254.169.253
Oct 27 08:23:51.353477 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 27 08:23:51.384287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:23:51.394445 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 27 08:23:51.395219 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 27 08:23:51.396300 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 27 08:23:51.399059 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 27 08:23:51.434539 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 27 08:23:52.204237 disk-uuid[740]: Warning: The kernel is still using the old partition table.
Oct 27 08:23:52.204237 disk-uuid[740]: The new table will be used at the next reboot or after you
Oct 27 08:23:52.204237 disk-uuid[740]: run partprobe(8) or kpartx(8)
Oct 27 08:23:52.204237 disk-uuid[740]: The operation has completed successfully.
Oct 27 08:23:52.215066 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 27 08:23:52.215290 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 27 08:23:52.218872 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 27 08:23:52.260615 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (834)
Oct 27 08:23:52.260708 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883
Oct 27 08:23:52.264228 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 27 08:23:52.269776 kernel: BTRFS info (device vda6): turning on async discard
Oct 27 08:23:52.269881 kernel: BTRFS info (device vda6): enabling free space tree
Oct 27 08:23:52.279264 kernel: BTRFS info (device vda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883
Oct 27 08:23:52.280034 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 27 08:23:52.283655 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 27 08:23:52.373414 systemd-networkd[684]: eth0: Gained IPv6LL
Oct 27 08:23:52.498910 ignition[853]: Ignition 2.22.0
Oct 27 08:23:52.498938 ignition[853]: Stage: fetch-offline
Oct 27 08:23:52.499043 ignition[853]: no configs at "/usr/lib/ignition/base.d"
Oct 27 08:23:52.499066 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 27 08:23:52.502539 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 27 08:23:52.499310 ignition[853]: parsed url from cmdline: ""
Oct 27 08:23:52.504787 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 27 08:23:52.499316 ignition[853]: no config URL provided
Oct 27 08:23:52.499327 ignition[853]: reading system config file "/usr/lib/ignition/user.ign"
Oct 27 08:23:52.499350 ignition[853]: no config at "/usr/lib/ignition/user.ign"
Oct 27 08:23:52.499360 ignition[853]: failed to fetch config: resource requires networking
Oct 27 08:23:52.499654 ignition[853]: Ignition finished successfully
Oct 27 08:23:52.557321 ignition[862]: Ignition 2.22.0
Oct 27 08:23:52.557337 ignition[862]: Stage: fetch
Oct 27 08:23:52.560572 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Oct 27 08:23:52.560602 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 27 08:23:52.560736 ignition[862]: parsed url from cmdline: ""
Oct 27 08:23:52.560740 ignition[862]: no config URL provided
Oct 27 08:23:52.560746 ignition[862]: reading system config file "/usr/lib/ignition/user.ign"
Oct 27 08:23:52.560754 ignition[862]: no config at "/usr/lib/ignition/user.ign"
Oct 27 08:23:52.560783 ignition[862]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Oct 27 08:23:52.574992 ignition[862]: GET result: OK
Oct 27 08:23:52.575193 ignition[862]: parsing config with SHA512: 5dfab43338c562bfe339f47b3989b1244fdd20660b83ccc75d033bb2b81db58b3cfcb76621c137f9408c01228a6c53530a9897c976205b2ed0e0c556ddcb09e4
Oct 27 08:23:52.581652 unknown[862]: fetched base config from "system"
Oct 27 08:23:52.581665 unknown[862]: fetched base config from "system"
Oct 27 08:23:52.581986 ignition[862]: fetch: fetch complete
Oct 27 08:23:52.581671 unknown[862]: fetched user config from "digitalocean"
Oct 27 08:23:52.581995 ignition[862]: fetch: fetch passed
Oct 27 08:23:52.585264 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 27 08:23:52.582056 ignition[862]: Ignition finished successfully
Oct 27 08:23:52.587979 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 27 08:23:52.640992 ignition[868]: Ignition 2.22.0
Oct 27 08:23:52.642142 ignition[868]: Stage: kargs
Oct 27 08:23:52.642436 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Oct 27 08:23:52.642453 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 27 08:23:52.644889 ignition[868]: kargs: kargs passed
Oct 27 08:23:52.644970 ignition[868]: Ignition finished successfully
Oct 27 08:23:52.647207 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 27 08:23:52.653328 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 27 08:23:52.689498 ignition[874]: Ignition 2.22.0
Oct 27 08:23:52.690365 ignition[874]: Stage: disks
Oct 27 08:23:52.690545 ignition[874]: no configs at "/usr/lib/ignition/base.d"
Oct 27 08:23:52.690555 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 27 08:23:52.691702 ignition[874]: disks: disks passed
Oct 27 08:23:52.691754 ignition[874]: Ignition finished successfully
Oct 27 08:23:52.694226 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 27 08:23:52.695171 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 27 08:23:52.695786 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 27 08:23:52.696785 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 27 08:23:52.697859 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 27 08:23:52.698802 systemd[1]: Reached target basic.target - Basic System.
Oct 27 08:23:52.701246 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 27 08:23:52.742405 systemd-fsck[883]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 27 08:23:52.746468 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 27 08:23:52.748304 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 27 08:23:52.878162 kernel: EXT4-fs (vda9): mounted filesystem e90e2fe3-e1db-4bff-abac-c8d1d032f674 r/w with ordered data mode. Quota mode: none.
Oct 27 08:23:52.879360 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 27 08:23:52.880606 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 27 08:23:52.883088 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 27 08:23:52.883442 systemd-networkd[684]: eth1: Gained IPv6LL
Oct 27 08:23:52.885841 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 27 08:23:52.899020 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Oct 27 08:23:52.903282 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 27 08:23:52.904695 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 27 08:23:52.904741 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 27 08:23:52.909140 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892)
Oct 27 08:23:52.910560 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 27 08:23:52.915840 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883
Oct 27 08:23:52.915918 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 27 08:23:52.917617 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 27 08:23:52.925517 kernel: BTRFS info (device vda6): turning on async discard
Oct 27 08:23:52.925600 kernel: BTRFS info (device vda6): enabling free space tree
Oct 27 08:23:52.931604 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 27 08:23:53.016322 initrd-setup-root[922]: cut: /sysroot/etc/passwd: No such file or directory
Oct 27 08:23:53.026564 coreos-metadata[895]: Oct 27 08:23:53.025 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 27 08:23:53.029087 coreos-metadata[894]: Oct 27 08:23:53.027 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 27 08:23:53.032196 initrd-setup-root[929]: cut: /sysroot/etc/group: No such file or directory
Oct 27 08:23:53.039083 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory
Oct 27 08:23:53.040375 coreos-metadata[895]: Oct 27 08:23:53.040 INFO Fetch successful
Oct 27 08:23:53.042159 coreos-metadata[894]: Oct 27 08:23:53.041 INFO Fetch successful
Oct 27 08:23:53.050144 coreos-metadata[895]: Oct 27 08:23:53.049 INFO wrote hostname ci-9999.9.9-k-8ed45c9b51 to /sysroot/etc/hostname
Oct 27 08:23:53.051366 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 27 08:23:53.053248 initrd-setup-root[943]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 27 08:23:53.056219 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Oct 27 08:23:53.056358 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Oct 27 08:23:53.185699 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 27 08:23:53.188681 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 27 08:23:53.190563 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 27 08:23:53.213198 kernel: BTRFS info (device vda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883
Oct 27 08:23:53.234015 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 27 08:23:53.247094 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 27 08:23:53.255523 ignition[1014]: INFO : Ignition 2.22.0
Oct 27 08:23:53.256442 ignition[1014]: INFO : Stage: mount
Oct 27 08:23:53.257182 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 27 08:23:53.257182 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 27 08:23:53.259528 ignition[1014]: INFO : mount: mount passed
Oct 27 08:23:53.259528 ignition[1014]: INFO : Ignition finished successfully
Oct 27 08:23:53.262251 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 27 08:23:53.264309 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 27 08:23:53.286217 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 27 08:23:53.308183 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024)
Oct 27 08:23:53.311220 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883
Oct 27 08:23:53.311282 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 27 08:23:53.316946 kernel: BTRFS info (device vda6): turning on async discard
Oct 27 08:23:53.317019 kernel: BTRFS info (device vda6): enabling free space tree
Oct 27 08:23:53.319307 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 27 08:23:53.360417 ignition[1041]: INFO : Ignition 2.22.0
Oct 27 08:23:53.360417 ignition[1041]: INFO : Stage: files
Oct 27 08:23:53.360417 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 27 08:23:53.360417 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 27 08:23:53.364631 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping
Oct 27 08:23:53.366385 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 27 08:23:53.366385 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 27 08:23:53.371533 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 27 08:23:53.372294 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 27 08:23:53.372294 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 27 08:23:53.372051 unknown[1041]: wrote ssh authorized keys file for user: core
Oct 27 08:23:53.374614 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 27 08:23:53.374614 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 27 08:23:53.405492 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 27 08:23:53.451378 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 27 08:23:53.451378 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 27 08:23:53.453052 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 27 08:23:53.453052 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 27 08:23:53.453052 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 27 08:23:53.453052 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 27 08:23:53.453052 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 27 08:23:53.453052 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 27 08:23:53.453052 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 27 08:23:53.459304 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 27 08:23:53.459304 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 27 08:23:53.459304 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:23:53.459304 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:23:53.459304 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:23:53.459304 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Oct 27 08:23:53.869282 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 27 08:23:54.291229 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:23:54.291229 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 27 08:23:54.293527 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 27 08:23:54.293527 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 27 08:23:54.293527 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 27 08:23:54.293527 ignition[1041]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Oct 27 08:23:54.293527 ignition[1041]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Oct 27 08:23:54.298709 ignition[1041]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 27 08:23:54.298709 ignition[1041]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 27 08:23:54.298709 ignition[1041]: INFO : files: files passed
Oct 27 08:23:54.298709 ignition[1041]: INFO : Ignition finished successfully
Oct 27 08:23:54.296677 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 27 08:23:54.300363 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 27 08:23:54.303316 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 27 08:23:54.323340 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 27 08:23:54.323479 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 27 08:23:54.331777 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 27 08:23:54.331777 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 27 08:23:54.334468 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 27 08:23:54.336235 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 27 08:23:54.337603 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 27 08:23:54.339341 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 27 08:23:54.399456 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 27 08:23:54.399601 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 27 08:23:54.400868 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 27 08:23:54.401608 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 27 08:23:54.402761 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 27 08:23:54.404017 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 27 08:23:54.450952 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 27 08:23:54.453822 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 27 08:23:54.479310 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 27 08:23:54.479523 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 27 08:23:54.481843 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 27 08:23:54.482674 systemd[1]: Stopped target timers.target - Timer Units.
Oct 27 08:23:54.483607 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 27 08:23:54.483785 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 27 08:23:54.484988 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 27 08:23:54.485993 systemd[1]: Stopped target basic.target - Basic System.
Oct 27 08:23:54.486971 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 27 08:23:54.487791 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 27 08:23:54.488875 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 27 08:23:54.489776 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 27 08:23:54.490852 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 27 08:23:54.491728 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 27 08:23:54.492771 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 27 08:23:54.493697 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 27 08:23:54.494631 systemd[1]: Stopped target swap.target - Swaps.
Oct 27 08:23:54.495461 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 27 08:23:54.495629 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 27 08:23:54.496795 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 27 08:23:54.497979 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 27 08:23:54.498803 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 27 08:23:54.499029 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 27 08:23:54.499805 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 27 08:23:54.500005 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 27 08:23:54.501196 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 27 08:23:54.501471 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 27 08:23:54.502391 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 27 08:23:54.502546 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 27 08:23:54.503304 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 27 08:23:54.503451 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 27 08:23:54.506226 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 27 08:23:54.506813 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 27 08:23:54.506988 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 08:23:54.511433 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 27 08:23:54.512561 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 27 08:23:54.513344 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 27 08:23:54.514717 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 27 08:23:54.515494 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 27 08:23:54.516732 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 27 08:23:54.521494 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 27 08:23:54.527729 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 27 08:23:54.528484 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 27 08:23:54.555599 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 27 08:23:54.563996 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 27 08:23:54.565431 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 27 08:23:54.566695 ignition[1097]: INFO : Ignition 2.22.0
Oct 27 08:23:54.568506 ignition[1097]: INFO : Stage: umount
Oct 27 08:23:54.568506 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 27 08:23:54.568506 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 27 08:23:54.572209 ignition[1097]: INFO : umount: umount passed
Oct 27 08:23:54.572924 ignition[1097]: INFO : Ignition finished successfully
Oct 27 08:23:54.575407 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 27 08:23:54.575607 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 27 08:23:54.576869 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 27 08:23:54.576951 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 27 08:23:54.578045 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 27 08:23:54.578254 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 27 08:23:54.578959 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 27 08:23:54.579042 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 27 08:23:54.579850 systemd[1]: Stopped target network.target - Network.
Oct 27 08:23:54.580835 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 27 08:23:54.580931 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 27 08:23:54.581843 systemd[1]: Stopped target paths.target - Path Units.
Oct 27 08:23:54.582669 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 27 08:23:54.586290 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 27 08:23:54.586899 systemd[1]: Stopped target slices.target - Slice Units.
Oct 27 08:23:54.588132 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 27 08:23:54.589195 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 27 08:23:54.589283 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 27 08:23:54.589946 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 27 08:23:54.589996 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 27 08:23:54.590813 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 27 08:23:54.590899 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 27 08:23:54.591643 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 27 08:23:54.591698 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 27 08:23:54.592478 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 27 08:23:54.592552 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 27 08:23:54.593591 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 27 08:23:54.594733 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 27 08:23:54.605137 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 27 08:23:54.605278 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 27 08:23:54.609408 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 27 08:23:54.609564 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 27 08:23:54.613194 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 27 08:23:54.613768 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 27 08:23:54.613820 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 08:23:54.615875 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 27 08:23:54.617346 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 27 08:23:54.617434 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 27 08:23:54.620572 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 27 08:23:54.620668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 27 08:23:54.621394 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 27 08:23:54.621447 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 27 08:23:54.622494 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 27 08:23:54.637917 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 27 08:23:54.638889 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 27 08:23:54.640531 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 27 08:23:54.640622 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 27 08:23:54.642826 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 27 08:23:54.642892 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 08:23:54.643400 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 27 08:23:54.643474 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 27 08:23:54.647434 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 27 08:23:54.648042 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 27 08:23:54.649311 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 27 08:23:54.649993 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 27 08:23:54.652677 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 27 08:23:54.653185 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 27 08:23:54.653261 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 08:23:54.654330 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 27 08:23:54.654385 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 27 08:23:54.654876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 27 08:23:54.654922 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:23:54.672897 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 27 08:23:54.673046 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 27 08:23:54.680292 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 27 08:23:54.680513 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 27 08:23:54.681812 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 27 08:23:54.683843 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 27 08:23:54.706690 systemd[1]: Switching root.
Oct 27 08:23:54.742271 systemd-journald[292]: Journal stopped
Oct 27 08:23:55.919437 systemd-journald[292]: Received SIGTERM from PID 1 (systemd).
Oct 27 08:23:55.919546 kernel: SELinux: policy capability network_peer_controls=1
Oct 27 08:23:55.919564 kernel: SELinux: policy capability open_perms=1
Oct 27 08:23:55.919581 kernel: SELinux: policy capability extended_socket_class=1
Oct 27 08:23:55.919600 kernel: SELinux: policy capability always_check_network=0
Oct 27 08:23:55.919616 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 27 08:23:55.919629 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 27 08:23:55.919642 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 27 08:23:55.919663 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 27 08:23:55.919677 kernel: SELinux: policy capability userspace_initial_context=0
Oct 27 08:23:55.919690 kernel: audit: type=1403 audit(1761553434.928:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 27 08:23:55.919704 systemd[1]: Successfully loaded SELinux policy in 70.800ms.
Oct 27 08:23:55.919724 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.981ms.
Oct 27 08:23:55.919738 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 27 08:23:55.919753 systemd[1]: Detected virtualization kvm.
Oct 27 08:23:55.919774 systemd[1]: Detected architecture x86-64.
Oct 27 08:23:55.919789 systemd[1]: Detected first boot.
Oct 27 08:23:55.919802 systemd[1]: Hostname set to .
Oct 27 08:23:55.919820 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 27 08:23:55.919834 zram_generator::config[1142]: No configuration found.
Oct 27 08:23:55.919849 kernel: Guest personality initialized and is inactive
Oct 27 08:23:55.919868 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 27 08:23:55.919880 kernel: Initialized host personality
Oct 27 08:23:55.919894 kernel: NET: Registered PF_VSOCK protocol family
Oct 27 08:23:55.919906 systemd[1]: Populated /etc with preset unit settings.
Oct 27 08:23:55.919920 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 27 08:23:55.919933 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 27 08:23:55.919948 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 27 08:23:55.919969 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 27 08:23:55.919983 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 27 08:23:55.919996 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 27 08:23:55.920009 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 27 08:23:55.920023 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 27 08:23:55.920043 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 27 08:23:55.920057 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 27 08:23:55.920076 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 27 08:23:55.920090 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 27 08:23:55.920104 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 27 08:23:55.921173 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 27 08:23:55.921205 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 27 08:23:55.921227 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 27 08:23:55.921255 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 27 08:23:55.921269 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 27 08:23:55.921283 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 27 08:23:55.921296 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 27 08:23:55.921309 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 27 08:23:55.921322 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 27 08:23:55.921341 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 27 08:23:55.921355 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 27 08:23:55.921370 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 27 08:23:55.921383 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 27 08:23:55.921395 systemd[1]: Reached target slices.target - Slice Units.
Oct 27 08:23:55.921408 systemd[1]: Reached target swap.target - Swaps.
Oct 27 08:23:55.921421 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 27 08:23:55.921441 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 27 08:23:55.921456 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 27 08:23:55.921470 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 08:23:55.921483 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 27 08:23:55.921496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 08:23:55.921508 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 27 08:23:55.921522 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 27 08:23:55.921536 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 27 08:23:55.921554 systemd[1]: Mounting media.mount - External Media Directory...
Oct 27 08:23:55.921568 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:23:55.921581 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 27 08:23:55.921594 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 27 08:23:55.921607 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 27 08:23:55.921620 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 27 08:23:55.921640 systemd[1]: Reached target machines.target - Containers.
Oct 27 08:23:55.921653 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 27 08:23:55.921667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 08:23:55.921681 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 27 08:23:55.921694 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 27 08:23:55.921707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 08:23:55.921720 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 27 08:23:55.921739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 08:23:55.921752 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 27 08:23:55.921764 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 08:23:55.921779 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 27 08:23:55.921792 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 27 08:23:55.921806 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 27 08:23:55.921820 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 27 08:23:55.921838 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 27 08:23:55.921853 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 08:23:55.921866 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 27 08:23:55.921878 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 27 08:23:55.921891 kernel: fuse: init (API version 7.41)
Oct 27 08:23:55.921905 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 27 08:23:55.921925 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 27 08:23:55.921940 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 27 08:23:55.921954 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 27 08:23:55.921968 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:23:55.921987 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 27 08:23:55.922000 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 27 08:23:55.922013 systemd[1]: Mounted media.mount - External Media Directory.
Oct 27 08:23:55.922026 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 27 08:23:55.922038 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 27 08:23:55.922067 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 27 08:23:55.922086 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 08:23:55.922104 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 08:23:55.922598 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 08:23:55.922656 systemd-journald[1215]: Collecting audit messages is disabled.
Oct 27 08:23:55.922697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 08:23:55.922718 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 08:23:55.922731 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 27 08:23:55.922747 systemd-journald[1215]: Journal started
Oct 27 08:23:55.922771 systemd-journald[1215]: Runtime Journal (/run/log/journal/e50e1047b37e4aeb85652d2768dc5a65) is 4.9M, max 39.2M, 34.3M free.
Oct 27 08:23:55.931095 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 27 08:23:55.595822 systemd[1]: Queued start job for default target multi-user.target.
Oct 27 08:23:55.621407 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 27 08:23:55.622024 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 27 08:23:55.938155 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 27 08:23:55.936811 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 27 08:23:55.941062 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 27 08:23:55.943300 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 08:23:55.943539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 27 08:23:55.945194 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 27 08:23:55.946756 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 27 08:23:55.963471 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 27 08:23:55.966450 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 27 08:23:55.972437 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 27 08:23:55.974243 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 27 08:23:55.974301 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 27 08:23:55.979001 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 27 08:23:55.981108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 08:23:55.991930 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 27 08:23:55.998440 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 27 08:23:55.999220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 27 08:23:56.001456 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 27 08:23:56.002265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 27 08:23:56.009485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 27 08:23:56.013392 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 27 08:23:56.020421 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 27 08:23:56.025234 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 08:23:56.038581 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 27 08:23:56.041004 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 27 08:23:56.042486 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 27 08:23:56.050288 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 27 08:23:56.056576 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 27 08:23:56.062639 systemd-journald[1215]: Time spent on flushing to /var/log/journal/e50e1047b37e4aeb85652d2768dc5a65 is 72.986ms for 998 entries.
Oct 27 08:23:56.062639 systemd-journald[1215]: System Journal (/var/log/journal/e50e1047b37e4aeb85652d2768dc5a65) is 8M, max 163.5M, 155.5M free.
Oct 27 08:23:56.155302 systemd-journald[1215]: Received client request to flush runtime journal.
Oct 27 08:23:56.155381 kernel: ACPI: bus type drm_connector registered
Oct 27 08:23:56.155414 kernel: loop1: detected capacity change from 0 to 110984
Oct 27 08:23:56.155435 kernel: loop2: detected capacity change from 0 to 229808
Oct 27 08:23:56.072533 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 27 08:23:56.073285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 27 08:23:56.086521 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 27 08:23:56.087763 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 27 08:23:56.092505 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 27 08:23:56.138491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 27 08:23:56.158242 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 27 08:23:56.159298 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 27 08:23:56.166737 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 27 08:23:56.173172 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 27 08:23:56.179396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 27 08:23:56.182175 kernel: loop3: detected capacity change from 0 to 8
Oct 27 08:23:56.208128 kernel: loop4: detected capacity change from 0 to 128048
Oct 27 08:23:56.213426 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 27 08:23:56.237671 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 27 08:23:56.247189 kernel: loop5: detected capacity change from 0 to 110984
Oct 27 08:23:56.254396 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Oct 27 08:23:56.254777 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Oct 27 08:23:56.270355 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 27 08:23:56.274181 kernel: loop6: detected capacity change from 0 to 229808
Oct 27 08:23:56.300167 kernel: loop7: detected capacity change from 0 to 8
Oct 27 08:23:56.305284 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 27 08:23:56.311156 kernel: loop1: detected capacity change from 0 to 128048
Oct 27 08:23:56.326557 (sd-merge)[1290]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'.
Oct 27 08:23:56.334959 (sd-merge)[1290]: Merged extensions into '/usr'.
Oct 27 08:23:56.343561 systemd[1]: Reload requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 27 08:23:56.343582 systemd[1]: Reloading...
Oct 27 08:23:56.433913 systemd-resolved[1283]: Positive Trust Anchors:
Oct 27 08:23:56.438175 systemd-resolved[1283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 27 08:23:56.438270 systemd-resolved[1283]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 27 08:23:56.439308 systemd-resolved[1283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 27 08:23:56.467622 systemd-resolved[1283]: Using system hostname 'ci-9999.9.9-k-8ed45c9b51'.
Oct 27 08:23:56.499244 zram_generator::config[1328]: No configuration found.
Oct 27 08:23:56.729942 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 27 08:23:56.730129 systemd[1]: Reloading finished in 386 ms.
Oct 27 08:23:56.768144 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 27 08:23:56.770059 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 27 08:23:56.772918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 27 08:23:56.782388 systemd[1]: Starting ensure-sysext.service...
Oct 27 08:23:56.785550 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 27 08:23:56.813447 systemd[1]: Reload requested from client PID 1367 ('systemctl') (unit ensure-sysext.service)...
Oct 27 08:23:56.813615 systemd[1]: Reloading...
Oct 27 08:23:56.855314 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 27 08:23:56.855350 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 27 08:23:56.855694 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 27 08:23:56.856018 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 27 08:23:56.858101 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 27 08:23:56.859478 systemd-tmpfiles[1368]: ACLs are not supported, ignoring.
Oct 27 08:23:56.859543 systemd-tmpfiles[1368]: ACLs are not supported, ignoring.
Oct 27 08:23:56.873962 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot.
Oct 27 08:23:56.873986 systemd-tmpfiles[1368]: Skipping /boot
Oct 27 08:23:56.903550 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot.
Oct 27 08:23:56.905332 systemd-tmpfiles[1368]: Skipping /boot
Oct 27 08:23:56.952273 zram_generator::config[1400]: No configuration found.
Oct 27 08:23:57.175399 systemd[1]: Reloading finished in 361 ms.
Oct 27 08:23:57.189455 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 27 08:23:57.206211 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 27 08:23:57.217470 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 27 08:23:57.219569 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 27 08:23:57.223444 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 27 08:23:57.226536 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 27 08:23:57.232899 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 27 08:23:57.239555 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 27 08:23:57.243866 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:23:57.244090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 08:23:57.249182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 08:23:57.253959 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 08:23:57.261260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 08:23:57.262399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 08:23:57.262770 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 08:23:57.263355 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:23:57.273095 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:23:57.275030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 08:23:57.275350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 08:23:57.275498 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 08:23:57.275644 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:23:57.289865 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:23:57.291026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 08:23:57.299452 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 27 08:23:57.301427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 08:23:57.301642 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 08:23:57.301841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:23:57.329253 systemd[1]: Finished ensure-sysext.service.
Oct 27 08:23:57.340767 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 27 08:23:57.370632 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 27 08:23:57.406492 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 08:23:57.407354 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 27 08:23:57.417256 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 27 08:23:57.418869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 08:23:57.422167 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 08:23:57.425297 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 27 08:23:57.427074 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 27 08:23:57.434708 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 27 08:23:57.434946 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 27 08:23:57.440868 systemd-udevd[1447]: Using default interface naming scheme 'v257'.
Oct 27 08:23:57.445656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 08:23:57.445891 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 08:23:57.450432 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 27 08:23:57.474140 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 27 08:23:57.499220 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 27 08:23:57.507331 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 27 08:23:57.531381 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 27 08:23:57.534459 systemd[1]: Reached target time-set.target - System Time Set.
Oct 27 08:23:57.576782 augenrules[1502]: No rules
Oct 27 08:23:57.579241 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 27 08:23:57.579697 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 27 08:23:57.677506 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Oct 27 08:23:57.684084 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Oct 27 08:23:57.685645 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:23:57.685815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 08:23:57.688711 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 08:23:57.693667 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 08:23:57.696733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 08:23:57.704017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 08:23:57.704065 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 08:23:57.704104 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 27 08:23:57.704136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:23:57.730660 systemd-networkd[1482]: lo: Link UP
Oct 27 08:23:57.730675 systemd-networkd[1482]: lo: Gained carrier
Oct 27 08:23:57.738720 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 27 08:23:57.739374 systemd[1]: Reached target network.target - Network.
Oct 27 08:23:57.743457 systemd-networkd[1482]: eth0: Configuring with /run/systemd/network/10-52:a0:d0:ea:18:f9.network.
Oct 27 08:23:57.743823 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 27 08:23:57.748433 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 27 08:23:57.751438 systemd-networkd[1482]: eth0: Link UP
Oct 27 08:23:57.753772 systemd-networkd[1482]: eth0: Gained carrier
Oct 27 08:23:57.768216 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 27 08:23:57.775481 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Oct 27 08:23:57.792722 kernel: ISO 9660 Extensions: RRIP_1991A
Oct 27 08:23:57.806546 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Oct 27 08:23:57.819594 systemd-networkd[1482]: eth1: Configuring with /run/systemd/network/10-7e:65:9a:dc:64:35.network.
Oct 27 08:23:57.829240 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Oct 27 08:23:57.831312 systemd-networkd[1482]: eth1: Link UP
Oct 27 08:23:57.834927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 27 08:23:57.835062 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Oct 27 08:23:57.835837 systemd-networkd[1482]: eth1: Gained carrier
Oct 27 08:23:57.844730 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 27 08:23:57.845686 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Oct 27 08:23:57.851788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 08:23:57.852279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 08:23:57.854644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 27 08:23:57.856624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 08:23:57.863264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 08:23:57.873438 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 08:23:57.883300 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 27 08:23:57.884738 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 27 08:23:57.895778 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 27 08:23:57.943167 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 27 08:23:57.976023 kernel: mousedev: PS/2 mouse device common for all mice
Oct 27 08:23:57.979712 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 27 08:23:57.987859 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 27 08:23:57.999662 kernel: ACPI: button: Power Button [PWRF]
Oct 27 08:23:57.999770 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 27 08:23:58.140802 ldconfig[1445]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 27 08:23:58.150236 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 27 08:23:58.154526 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 27 08:23:58.196608 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 27 08:23:58.202060 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 27 08:23:58.203703 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 27 08:23:58.205276 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 27 08:23:58.206210 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 27 08:23:58.207391 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 27 08:23:58.208376 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 27 08:23:58.209676 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 27 08:23:58.210830 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 27 08:23:58.210881 systemd[1]: Reached target paths.target - Path Units.
Oct 27 08:23:58.213212 systemd[1]: Reached target timers.target - Timer Units.
Oct 27 08:23:58.214932 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 27 08:23:58.220989 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 27 08:23:58.228978 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 27 08:23:58.231393 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 27 08:23:58.232007 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 27 08:23:58.241603 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 27 08:23:58.244184 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 27 08:23:58.243729 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 27 08:23:58.245618 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 27 08:23:58.260384 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 27 08:23:58.260345 systemd[1]: Reached target sockets.target - Socket Units.
Oct 27 08:23:58.283152 kernel: Console: switching to colour dummy device 80x25
Oct 27 08:23:58.283508 systemd[1]: Reached target basic.target - Basic System.
Oct 27 08:23:58.283683 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 27 08:23:58.283803 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 27 08:23:58.287341 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 27 08:23:58.287414 kernel: [drm] features: -context_init
Oct 27 08:23:58.287741 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 27 08:23:58.291690 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 27 08:23:58.293353 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 27 08:23:58.299415 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 27 08:23:58.320704 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 27 08:23:58.326254 kernel: [drm] number of scanouts: 1
Oct 27 08:23:58.326353 kernel: [drm] number of cap sets: 0
Oct 27 08:23:58.363648 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 27 08:23:58.363853 coreos-metadata[1557]: Oct 27 08:23:58.362 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 27 08:23:58.370762 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 27 08:23:58.370894 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 27 08:23:58.372566 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 27 08:23:58.374341 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 27 08:23:58.376367 jq[1560]: false
Oct 27 08:23:58.377281 coreos-metadata[1557]: Oct 27 08:23:58.377 INFO Fetch successful
Oct 27 08:23:58.380061 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 27 08:23:58.380194 kernel: Console: switching to colour frame buffer device 128x48
Oct 27 08:23:58.379659 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 27 08:23:58.391178 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 27 08:23:58.403688 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 27 08:23:58.411712 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 27 08:23:58.430978 extend-filesystems[1563]: Found /dev/vda6
Oct 27 08:23:58.434311 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 27 08:23:58.438624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 08:23:58.440575 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 27 08:23:58.441395 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 27 08:23:58.443620 extend-filesystems[1563]: Found /dev/vda9
Oct 27 08:23:58.450737 systemd[1]: Starting update-engine.service - Update Engine...
Oct 27 08:23:58.451455 extend-filesystems[1563]: Checking size of /dev/vda9
Oct 27 08:23:58.456656 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing passwd entry cache
Oct 27 08:23:58.456668 oslogin_cache_refresh[1564]: Refreshing passwd entry cache
Oct 27 08:23:58.461624 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 27 08:23:58.478162 extend-filesystems[1563]: Resized partition /dev/vda9
Oct 27 08:23:58.479350 oslogin_cache_refresh[1564]: Failure getting users, quitting
Oct 27 08:23:58.480828 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting users, quitting
Oct 27 08:23:58.480828 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 27 08:23:58.480828 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing group entry cache
Oct 27 08:23:58.479378 oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 27 08:23:58.479447 oslogin_cache_refresh[1564]: Refreshing group entry cache
Oct 27 08:23:58.488220 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 27 08:23:58.490735 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 27 08:23:58.494368 oslogin_cache_refresh[1564]: Failure getting groups, quitting
Oct 27 08:23:58.495366 extend-filesystems[1587]: resize2fs 1.47.3 (8-Jul-2025)
Oct 27 08:23:58.495798 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting groups, quitting
Oct 27 08:23:58.495798 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 27 08:23:58.490983 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 27 08:23:58.494392 oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 27 08:23:58.496150 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 27 08:23:58.496417 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 27 08:23:58.503160 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks
Oct 27 08:23:58.512641 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 27 08:23:58.514283 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 27 08:23:58.569318 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 27 08:23:58.570338 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:23:58.578919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 08:23:58.596309 jq[1578]: true
Oct 27 08:23:58.620707 systemd[1]: motdgen.service: Deactivated successfully.
Oct 27 08:23:58.620979 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 27 08:23:58.630063 dbus-daemon[1558]: [system] SELinux support is enabled
Oct 27 08:23:58.630747 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 27 08:23:58.634770 update_engine[1575]: I20251027 08:23:58.632968 1575 main.cc:92] Flatcar Update Engine starting
Oct 27 08:23:58.641155 kernel: EXT4-fs (vda9): resized filesystem to 14138363
Oct 27 08:23:58.635725 (ntainerd)[1614]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 27 08:23:58.636535 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 27 08:23:58.636576 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 27 08:23:58.669411 extend-filesystems[1587]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 27 08:23:58.669411 extend-filesystems[1587]: old_desc_blocks = 1, new_desc_blocks = 7
Oct 27 08:23:58.669411 extend-filesystems[1587]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long.
Oct 27 08:23:58.637622 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 27 08:23:58.700756 extend-filesystems[1563]: Resized filesystem in /dev/vda9
Oct 27 08:23:58.637699 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Oct 27 08:23:58.717176 tar[1588]: linux-amd64/LICENSE
Oct 27 08:23:58.717176 tar[1588]: linux-amd64/helm
Oct 27 08:23:58.717518 update_engine[1575]: I20251027 08:23:58.702986 1575 update_check_scheduler.cc:74] Next update check in 3m14s
Oct 27 08:23:58.637716 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 27 08:23:58.665807 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 27 08:23:58.676402 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 27 08:23:58.688975 systemd[1]: Started update-engine.service - Update Engine.
Oct 27 08:23:58.735155 jq[1615]: true
Oct 27 08:23:58.740094 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 27 08:23:58.742590 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 27 08:23:58.742824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:23:58.761457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 08:23:58.778262 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 27 08:23:58.781992 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 27 08:23:58.847460 kernel: EDAC MC: Ver: 3.0.0
Oct 27 08:23:58.960844 systemd-logind[1572]: New seat seat0.
Oct 27 08:23:58.970730 systemd-logind[1572]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 27 08:23:58.975801 systemd-logind[1572]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 27 08:23:58.976329 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 27 08:23:59.032898 bash[1657]: Updated "/home/core/.ssh/authorized_keys"
Oct 27 08:23:59.037438 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 27 08:23:59.046274 systemd[1]: Starting sshkeys.service...
Oct 27 08:23:59.060163 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 27 08:23:59.073828 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:23:59.101648 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 27 08:23:59.106492 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 27 08:23:59.225274 coreos-metadata[1662]: Oct 27 08:23:59.224 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 27 08:23:59.240156 coreos-metadata[1662]: Oct 27 08:23:59.240 INFO Fetch successful
Oct 27 08:23:59.261223 unknown[1662]: wrote ssh authorized keys file for user: core
Oct 27 08:23:59.268270 containerd[1614]: time="2025-10-27T08:23:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 27 08:23:59.278685 containerd[1614]: time="2025-10-27T08:23:59.278619770Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335045828Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.942µs"
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335106095Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335146555Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335345385Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335360482Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335388363Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335445409Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335456971Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335844402Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335875655Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335893623Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 27 08:23:59.337146 containerd[1614]: time="2025-10-27T08:23:59.335907337Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 27 08:23:59.337514 containerd[1614]: time="2025-10-27T08:23:59.336072997Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 27 08:23:59.341534 update-ssh-keys[1667]: Updated "/home/core/.ssh/authorized_keys"
Oct 27 08:23:59.343691 containerd[1614]: time="2025-10-27T08:23:59.343656949Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 27 08:23:59.343801 containerd[1614]: time="2025-10-27T08:23:59.343785171Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 27 08:23:59.344932 containerd[1614]: time="2025-10-27T08:23:59.344622259Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 27 08:23:59.344759 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 27 08:23:59.350517 systemd[1]: Finished sshkeys.service.
Oct 27 08:23:59.356410 containerd[1614]: time="2025-10-27T08:23:59.355149178Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 27 08:23:59.357589 containerd[1614]: time="2025-10-27T08:23:59.357476842Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 27 08:23:59.357810 containerd[1614]: time="2025-10-27T08:23:59.357794735Z" level=info msg="metadata content store policy set" policy=shared
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362276543Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362341906Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362358659Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362372772Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362386649Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362399318Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362428161Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362443316Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362455294Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362465085Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362474916Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362487466Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362644608Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 27 08:23:59.363147 containerd[1614]: time="2025-10-27T08:23:59.362685890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362710772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362726709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362737793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362758634Z"
level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362779998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362792935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362806272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362817188Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362831242Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362912435Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362928546Z" level=info msg="Start snapshots syncer" Oct 27 08:23:59.363537 containerd[1614]: time="2025-10-27T08:23:59.362955323Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 27 08:23:59.365451 containerd[1614]: time="2025-10-27T08:23:59.364285731Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 27 08:23:59.365451 containerd[1614]: time="2025-10-27T08:23:59.365233130Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 27 08:23:59.365647 containerd[1614]: time="2025-10-27T08:23:59.365381638Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 27 08:23:59.365821 containerd[1614]: time="2025-10-27T08:23:59.365802701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 27 08:23:59.365905 containerd[1614]: time="2025-10-27T08:23:59.365892847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 27 08:23:59.365976 containerd[1614]: time="2025-10-27T08:23:59.365964613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 27 08:23:59.366089 containerd[1614]: time="2025-10-27T08:23:59.366075331Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 27 08:23:59.366165 containerd[1614]: time="2025-10-27T08:23:59.366153315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.366204320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367157430Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367225202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367240773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367251395Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367321139Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367338734Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367348794Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367369432Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367377429Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367386267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367396561Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367413430Z" level=info msg="runtime interface created" Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367418640Z" level=info msg="created NRI interface" Oct 27 08:23:59.367696 containerd[1614]: time="2025-10-27T08:23:59.367426588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 27 08:23:59.368042 containerd[1614]: time="2025-10-27T08:23:59.367450594Z" level=info msg="Connect containerd service" Oct 27 08:23:59.368042 containerd[1614]: time="2025-10-27T08:23:59.367479742Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 08:23:59.370837 
containerd[1614]: time="2025-10-27T08:23:59.370809168Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 08:23:59.538266 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 27 08:23:59.595538 containerd[1614]: time="2025-10-27T08:23:59.595428884Z" level=info msg="Start subscribing containerd event" Oct 27 08:23:59.595538 containerd[1614]: time="2025-10-27T08:23:59.595497969Z" level=info msg="Start recovering state" Oct 27 08:23:59.595697 containerd[1614]: time="2025-10-27T08:23:59.595626416Z" level=info msg="Start event monitor" Oct 27 08:23:59.595697 containerd[1614]: time="2025-10-27T08:23:59.595641599Z" level=info msg="Start cni network conf syncer for default" Oct 27 08:23:59.595697 containerd[1614]: time="2025-10-27T08:23:59.595648970Z" level=info msg="Start streaming server" Oct 27 08:23:59.595697 containerd[1614]: time="2025-10-27T08:23:59.595658371Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 27 08:23:59.595697 containerd[1614]: time="2025-10-27T08:23:59.595665471Z" level=info msg="runtime interface starting up..." Oct 27 08:23:59.595697 containerd[1614]: time="2025-10-27T08:23:59.595671543Z" level=info msg="starting plugins..." Oct 27 08:23:59.595697 containerd[1614]: time="2025-10-27T08:23:59.595684959Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 27 08:23:59.597834 containerd[1614]: time="2025-10-27T08:23:59.597263563Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 27 08:23:59.597834 containerd[1614]: time="2025-10-27T08:23:59.597339498Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 27 08:23:59.597834 containerd[1614]: time="2025-10-27T08:23:59.597412666Z" level=info msg="containerd successfully booted in 0.329987s" Oct 27 08:23:59.597647 systemd[1]: Started containerd.service - containerd container runtime. Oct 27 08:23:59.606482 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 27 08:23:59.613383 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 27 08:23:59.640858 systemd[1]: issuegen.service: Deactivated successfully. Oct 27 08:23:59.641387 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 27 08:23:59.648662 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 27 08:23:59.666155 tar[1588]: linux-amd64/README.md Oct 27 08:23:59.676547 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 27 08:23:59.685852 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 27 08:23:59.693428 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 27 08:23:59.694742 systemd[1]: Reached target getty.target - Login Prompts. Oct 27 08:23:59.699314 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 27 08:23:59.795390 systemd-networkd[1482]: eth0: Gained IPv6LL Oct 27 08:23:59.797076 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Oct 27 08:23:59.800297 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 27 08:23:59.801971 systemd[1]: Reached target network-online.target - Network is Online. Oct 27 08:23:59.806725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:23:59.813494 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 27 08:23:59.847307 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Oct 27 08:23:59.859299 systemd-networkd[1482]: eth1: Gained IPv6LL
Oct 27 08:23:59.860772 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Oct 27 08:24:01.043523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:24:01.045513 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 27 08:24:01.048378 systemd[1]: Startup finished in 2.444s (kernel) + 5.169s (initrd) + 6.187s (userspace) = 13.801s.
Oct 27 08:24:01.067253 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 27 08:24:01.936621 kubelet[1721]: E1027 08:24:01.936551 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 27 08:24:01.940783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 27 08:24:01.941029 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 27 08:24:01.942696 systemd[1]: kubelet.service: Consumed 1.490s CPU time, 267.1M memory peak.
Oct 27 08:24:03.374524 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 27 08:24:03.377489 systemd[1]: Started sshd@0-143.198.224.48:22-139.178.89.65:34904.service - OpenSSH per-connection server daemon (139.178.89.65:34904).
Oct 27 08:24:03.523665 sshd[1732]: Accepted publickey for core from 139.178.89.65 port 34904 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos
Oct 27 08:24:03.526969 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:24:03.550753 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 27 08:24:03.550843 systemd-logind[1572]: New session 1 of user core.
Oct 27 08:24:03.554360 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 27 08:24:03.592502 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 27 08:24:03.596773 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 27 08:24:03.617552 (systemd)[1737]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 27 08:24:03.622550 systemd-logind[1572]: New session c1 of user core.
Oct 27 08:24:03.847212 systemd[1737]: Queued start job for default target default.target.
Oct 27 08:24:03.856936 systemd[1737]: Created slice app.slice - User Application Slice.
Oct 27 08:24:03.857256 systemd[1737]: Reached target paths.target - Paths.
Oct 27 08:24:03.857400 systemd[1737]: Reached target timers.target - Timers.
Oct 27 08:24:03.859939 systemd[1737]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 27 08:24:03.883305 systemd[1737]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 27 08:24:03.883450 systemd[1737]: Reached target sockets.target - Sockets.
Oct 27 08:24:03.883516 systemd[1737]: Reached target basic.target - Basic System.
Oct 27 08:24:03.883558 systemd[1737]: Reached target default.target - Main User Target.
Oct 27 08:24:03.883597 systemd[1737]: Startup finished in 247ms.
Oct 27 08:24:03.884034 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 27 08:24:03.897541 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 27 08:24:03.977318 systemd[1]: Started sshd@1-143.198.224.48:22-139.178.89.65:34914.service - OpenSSH per-connection server daemon (139.178.89.65:34914).
Oct 27 08:24:04.051704 sshd[1748]: Accepted publickey for core from 139.178.89.65 port 34914 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos
Oct 27 08:24:04.054383 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:24:04.060505 systemd-logind[1572]: New session 2 of user core.
Oct 27 08:24:04.068521 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 27 08:24:04.135080 sshd[1751]: Connection closed by 139.178.89.65 port 34914
Oct 27 08:24:04.136074 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Oct 27 08:24:04.148625 systemd[1]: sshd@1-143.198.224.48:22-139.178.89.65:34914.service: Deactivated successfully.
Oct 27 08:24:04.152240 systemd[1]: session-2.scope: Deactivated successfully.
Oct 27 08:24:04.155643 systemd-logind[1572]: Session 2 logged out. Waiting for processes to exit.
Oct 27 08:24:04.158911 systemd-logind[1572]: Removed session 2.
Oct 27 08:24:04.162167 systemd[1]: Started sshd@2-143.198.224.48:22-139.178.89.65:34924.service - OpenSSH per-connection server daemon (139.178.89.65:34924).
Oct 27 08:24:04.242649 sshd[1757]: Accepted publickey for core from 139.178.89.65 port 34924 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos
Oct 27 08:24:04.245038 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:24:04.255957 systemd-logind[1572]: New session 3 of user core.
Oct 27 08:24:04.261674 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 27 08:24:04.322195 sshd[1760]: Connection closed by 139.178.89.65 port 34924
Oct 27 08:24:04.322924 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
Oct 27 08:24:04.337402 systemd[1]: sshd@2-143.198.224.48:22-139.178.89.65:34924.service: Deactivated successfully.
Oct 27 08:24:04.341599 systemd[1]: session-3.scope: Deactivated successfully.
Oct 27 08:24:04.344243 systemd-logind[1572]: Session 3 logged out. Waiting for processes to exit.
Oct 27 08:24:04.348736 systemd[1]: Started sshd@3-143.198.224.48:22-139.178.89.65:34932.service - OpenSSH per-connection server daemon (139.178.89.65:34932).
Oct 27 08:24:04.349901 systemd-logind[1572]: Removed session 3.
Oct 27 08:24:04.419473 sshd[1766]: Accepted publickey for core from 139.178.89.65 port 34932 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos
Oct 27 08:24:04.422470 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:24:04.433776 systemd-logind[1572]: New session 4 of user core.
Oct 27 08:24:04.450714 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 27 08:24:04.516797 sshd[1769]: Connection closed by 139.178.89.65 port 34932
Oct 27 08:24:04.517584 sshd-session[1766]: pam_unix(sshd:session): session closed for user core
Oct 27 08:24:04.530428 systemd[1]: sshd@3-143.198.224.48:22-139.178.89.65:34932.service: Deactivated successfully.
Oct 27 08:24:04.533326 systemd[1]: session-4.scope: Deactivated successfully.
Oct 27 08:24:04.534913 systemd-logind[1572]: Session 4 logged out. Waiting for processes to exit.
Oct 27 08:24:04.540648 systemd[1]: Started sshd@4-143.198.224.48:22-139.178.89.65:34946.service - OpenSSH per-connection server daemon (139.178.89.65:34946).
Oct 27 08:24:04.541925 systemd-logind[1572]: Removed session 4.
Oct 27 08:24:04.613092 sshd[1775]: Accepted publickey for core from 139.178.89.65 port 34946 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos
Oct 27 08:24:04.615680 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:24:04.622665 systemd-logind[1572]: New session 5 of user core.
Oct 27 08:24:04.632560 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 27 08:24:04.707381 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 27 08:24:04.707701 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 27 08:24:04.723917 sudo[1779]: pam_unix(sudo:session): session closed for user root
Oct 27 08:24:04.727639 sshd[1778]: Connection closed by 139.178.89.65 port 34946
Oct 27 08:24:04.730546 sshd-session[1775]: pam_unix(sshd:session): session closed for user core
Oct 27 08:24:04.742393 systemd[1]: sshd@4-143.198.224.48:22-139.178.89.65:34946.service: Deactivated successfully.
Oct 27 08:24:04.745762 systemd[1]: session-5.scope: Deactivated successfully.
Oct 27 08:24:04.746979 systemd-logind[1572]: Session 5 logged out. Waiting for processes to exit.
Oct 27 08:24:04.751062 systemd[1]: Started sshd@5-143.198.224.48:22-139.178.89.65:34956.service - OpenSSH per-connection server daemon (139.178.89.65:34956).
Oct 27 08:24:04.752091 systemd-logind[1572]: Removed session 5.
Oct 27 08:24:04.824435 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 34956 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos
Oct 27 08:24:04.826728 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:24:04.833558 systemd-logind[1572]: New session 6 of user core.
Oct 27 08:24:04.843591 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 27 08:24:04.907218 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 27 08:24:04.907640 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 27 08:24:04.914294 sudo[1790]: pam_unix(sudo:session): session closed for user root
Oct 27 08:24:04.923889 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 27 08:24:04.924443 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 27 08:24:04.937897 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 27 08:24:04.998383 augenrules[1812]: No rules
Oct 27 08:24:04.999779 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 27 08:24:05.000017 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 27 08:24:05.002424 sudo[1789]: pam_unix(sudo:session): session closed for user root
Oct 27 08:24:05.006826 sshd[1788]: Connection closed by 139.178.89.65 port 34956
Oct 27 08:24:05.007735 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Oct 27 08:24:05.017223 systemd[1]: sshd@5-143.198.224.48:22-139.178.89.65:34956.service: Deactivated successfully.
Oct 27 08:24:05.019668 systemd[1]: session-6.scope: Deactivated successfully.
Oct 27 08:24:05.021353 systemd-logind[1572]: Session 6 logged out. Waiting for processes to exit.
Oct 27 08:24:05.025699 systemd[1]: Started sshd@6-143.198.224.48:22-139.178.89.65:34970.service - OpenSSH per-connection server daemon (139.178.89.65:34970).
Oct 27 08:24:05.027078 systemd-logind[1572]: Removed session 6.
Oct 27 08:24:05.106050 sshd[1821]: Accepted publickey for core from 139.178.89.65 port 34970 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos
Oct 27 08:24:05.108058 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:24:05.115403 systemd-logind[1572]: New session 7 of user core.
Oct 27 08:24:05.122579 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 27 08:24:05.189131 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 27 08:24:05.189617 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 27 08:24:05.784755 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 27 08:24:05.804917 (dockerd)[1845]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 27 08:24:06.202895 dockerd[1845]: time="2025-10-27T08:24:06.202798002Z" level=info msg="Starting up"
Oct 27 08:24:06.204218 dockerd[1845]: time="2025-10-27T08:24:06.204177846Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 27 08:24:06.224411 dockerd[1845]: time="2025-10-27T08:24:06.224324491Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 27 08:24:06.246172 systemd[1]: var-lib-docker-metacopy\x2dcheck2259874568-merged.mount: Deactivated successfully.
Oct 27 08:24:06.267575 dockerd[1845]: time="2025-10-27T08:24:06.267515662Z" level=info msg="Loading containers: start."
Oct 27 08:24:06.280155 kernel: Initializing XFRM netlink socket
Oct 27 08:24:06.532018 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Oct 27 08:24:06.532729 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Oct 27 08:24:06.549376 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Oct 27 08:24:06.594519 systemd-networkd[1482]: docker0: Link UP
Oct 27 08:24:06.595388 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Oct 27 08:24:06.597836 dockerd[1845]: time="2025-10-27T08:24:06.597778327Z" level=info msg="Loading containers: done."
Oct 27 08:24:06.618769 dockerd[1845]: time="2025-10-27T08:24:06.618688720Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 27 08:24:06.618947 dockerd[1845]: time="2025-10-27T08:24:06.618840940Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 27 08:24:06.618947 dockerd[1845]: time="2025-10-27T08:24:06.618942306Z" level=info msg="Initializing buildkit"
Oct 27 08:24:06.620404 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2733353268-merged.mount: Deactivated successfully.
Oct 27 08:24:06.642826 dockerd[1845]: time="2025-10-27T08:24:06.642765821Z" level=info msg="Completed buildkit initialization"
Oct 27 08:24:06.654669 dockerd[1845]: time="2025-10-27T08:24:06.654579648Z" level=info msg="Daemon has completed initialization"
Oct 27 08:24:06.655131 dockerd[1845]: time="2025-10-27T08:24:06.654918544Z" level=info msg="API listen on /run/docker.sock"
Oct 27 08:24:06.655561 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 27 08:24:07.557071 containerd[1614]: time="2025-10-27T08:24:07.557005166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Oct 27 08:24:08.189722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1035321596.mount: Deactivated successfully.
Oct 27 08:24:09.853386 containerd[1614]: time="2025-10-27T08:24:09.853216341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:09.859935 containerd[1614]: time="2025-10-27T08:24:09.859687528Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Oct 27 08:24:09.865620 containerd[1614]: time="2025-10-27T08:24:09.865554708Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:09.873595 containerd[1614]: time="2025-10-27T08:24:09.873525780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:09.876336 containerd[1614]: time="2025-10-27T08:24:09.876281737Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.319229233s"
Oct 27 08:24:09.876336 containerd[1614]: time="2025-10-27T08:24:09.876333972Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Oct 27 08:24:09.881833 containerd[1614]: time="2025-10-27T08:24:09.881569916Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Oct 27 08:24:11.298095 containerd[1614]: time="2025-10-27T08:24:11.297994399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:11.299667 containerd[1614]: time="2025-10-27T08:24:11.299616278Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Oct 27 08:24:11.300636 containerd[1614]: time="2025-10-27T08:24:11.300575562Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:11.309140 containerd[1614]: time="2025-10-27T08:24:11.307768125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:11.309344 containerd[1614]: time="2025-10-27T08:24:11.309158711Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.427120712s"
Oct 27 08:24:11.309344 containerd[1614]: time="2025-10-27T08:24:11.309200432Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Oct 27 08:24:11.310452 containerd[1614]: time="2025-10-27T08:24:11.310411323Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Oct 27 08:24:12.109015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 27 08:24:12.114454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:24:12.315323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:24:12.326570 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 27 08:24:12.398242 kubelet[2136]: E1027 08:24:12.395813 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 27 08:24:12.401103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 27 08:24:12.401433 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 27 08:24:12.402214 systemd[1]: kubelet.service: Consumed 236ms CPU time, 110.5M memory peak.
Oct 27 08:24:12.779010 containerd[1614]: time="2025-10-27T08:24:12.778694532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:12.780469 containerd[1614]: time="2025-10-27T08:24:12.780428533Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Oct 27 08:24:12.780578 containerd[1614]: time="2025-10-27T08:24:12.780567983Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:12.784319 containerd[1614]: time="2025-10-27T08:24:12.784269170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:12.789754 containerd[1614]: time="2025-10-27T08:24:12.789165447Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.47871403s"
Oct 27 08:24:12.789754 containerd[1614]: time="2025-10-27T08:24:12.789226040Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Oct 27 08:24:12.790760 containerd[1614]: time="2025-10-27T08:24:12.790723950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Oct 27 08:24:13.823616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322181139.mount: Deactivated successfully.
Oct 27 08:24:14.358442 containerd[1614]: time="2025-10-27T08:24:14.358378161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:14.359360 containerd[1614]: time="2025-10-27T08:24:14.359188605Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Oct 27 08:24:14.359908 containerd[1614]: time="2025-10-27T08:24:14.359876833Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:14.361416 containerd[1614]: time="2025-10-27T08:24:14.361385089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:14.362640 containerd[1614]: time="2025-10-27T08:24:14.362022253Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.571260495s"
Oct 27 08:24:14.362640 containerd[1614]: time="2025-10-27T08:24:14.362333026Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Oct 27 08:24:14.363018 containerd[1614]: time="2025-10-27T08:24:14.362996235Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Oct 27 08:24:14.365448 systemd-resolved[1283]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Oct 27 08:24:14.882868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019495161.mount: Deactivated successfully.
Oct 27 08:24:15.827171 containerd[1614]: time="2025-10-27T08:24:15.827102667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:15.827974 containerd[1614]: time="2025-10-27T08:24:15.827930284Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Oct 27 08:24:15.828708 containerd[1614]: time="2025-10-27T08:24:15.828679688Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:15.831202 containerd[1614]: time="2025-10-27T08:24:15.831169535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:15.832673 containerd[1614]: time="2025-10-27T08:24:15.832300510Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.46920532s"
Oct 27 08:24:15.832673 containerd[1614]: time="2025-10-27T08:24:15.832333086Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Oct 27 08:24:15.833334 containerd[1614]: time="2025-10-27T08:24:15.833244758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 27 08:24:16.248908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3770532425.mount: Deactivated successfully.
Oct 27 08:24:16.254230 containerd[1614]: time="2025-10-27T08:24:16.254169714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 27 08:24:16.254967 containerd[1614]: time="2025-10-27T08:24:16.254928617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 27 08:24:16.256156 containerd[1614]: time="2025-10-27T08:24:16.255550707Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 27 08:24:16.257132 containerd[1614]: time="2025-10-27T08:24:16.257057505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 27 08:24:16.257879 containerd[1614]: time="2025-10-27T08:24:16.257684612Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 424.412998ms"
Oct 27 08:24:16.257879 containerd[1614]: time="2025-10-27T08:24:16.257716638Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 27 08:24:16.258279 containerd[1614]: time="2025-10-27T08:24:16.258257230Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Oct 27 08:24:16.748905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374421828.mount: Deactivated successfully.
Oct 27 08:24:17.459370 systemd-resolved[1283]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Oct 27 08:24:19.698146 containerd[1614]: time="2025-10-27T08:24:19.697314934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:19.698976 containerd[1614]: time="2025-10-27T08:24:19.698929371Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Oct 27 08:24:19.699235 containerd[1614]: time="2025-10-27T08:24:19.699213755Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:19.706193 containerd[1614]: time="2025-10-27T08:24:19.706147154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:24:19.706462 containerd[1614]: time="2025-10-27T08:24:19.706276246Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.447987682s"
Oct 27 08:24:19.706512 containerd[1614]: time="2025-10-27T08:24:19.706471456Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Oct 27 08:24:22.608583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 27 08:24:22.612409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:24:22.792305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:24:22.801538 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 27 08:24:22.862526 kubelet[2290]: E1027 08:24:22.862364 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 27 08:24:22.868985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 27 08:24:22.869229 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 27 08:24:22.870104 systemd[1]: kubelet.service: Consumed 182ms CPU time, 110.2M memory peak.
Oct 27 08:24:23.821559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:24:23.822153 systemd[1]: kubelet.service: Consumed 182ms CPU time, 110.2M memory peak.
Oct 27 08:24:23.824976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:24:23.856107 systemd[1]: Reload requested from client PID 2305 ('systemctl') (unit session-7.scope)...
Oct 27 08:24:23.856140 systemd[1]: Reloading...
Oct 27 08:24:23.987695 zram_generator::config[2346]: No configuration found.
Oct 27 08:24:24.270649 systemd[1]: Reloading finished in 414 ms.
Oct 27 08:24:24.330845 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 27 08:24:24.330938 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 27 08:24:24.331499 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:24:24.331556 systemd[1]: kubelet.service: Consumed 131ms CPU time, 98.1M memory peak.
Oct 27 08:24:24.333407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:24:24.500389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:24:24.512636 (kubelet)[2403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 27 08:24:24.568177 kubelet[2403]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 08:24:24.568177 kubelet[2403]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 27 08:24:24.568177 kubelet[2403]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 08:24:24.568543 kubelet[2403]: I1027 08:24:24.568137 2403 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 27 08:24:25.051522 kubelet[2403]: I1027 08:24:25.051453 2403 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 27 08:24:25.051522 kubelet[2403]: I1027 08:24:25.051491 2403 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 27 08:24:25.051846 kubelet[2403]: I1027 08:24:25.051815 2403 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 27 08:24:25.093213 kubelet[2403]: I1027 08:24:25.093155 2403 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 27 08:24:25.094150 kubelet[2403]: E1027 08:24:25.094087 2403 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.224.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.224.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 27 08:24:25.109462 kubelet[2403]: I1027 08:24:25.109429 2403 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 27 08:24:25.116231 kubelet[2403]: I1027 08:24:25.116179 2403 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 27 08:24:25.119107 kubelet[2403]: I1027 08:24:25.119031 2403 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 27 08:24:25.122725 kubelet[2403]: I1027 08:24:25.119091 2403 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999.9.9-k-8ed45c9b51","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 27 08:24:25.122725 kubelet[2403]: I1027 08:24:25.122737 2403 topology_manager.go:138] "Creating topology manager with none policy"
Oct 27 08:24:25.123013 kubelet[2403]: I1027 08:24:25.122759 2403 container_manager_linux.go:303] "Creating device plugin manager"
Oct 27 08:24:25.123013 kubelet[2403]: I1027 08:24:25.122957 2403 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 08:24:25.126399 kubelet[2403]: I1027 08:24:25.125972 2403 kubelet.go:480] "Attempting to sync node with API server"
Oct 27 08:24:25.126399 kubelet[2403]: I1027 08:24:25.126041 2403 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 27 08:24:25.126399 kubelet[2403]: I1027 08:24:25.126084 2403 kubelet.go:386] "Adding apiserver pod source"
Oct 27 08:24:25.128409 kubelet[2403]: I1027 08:24:25.127974 2403 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 27 08:24:25.135638 kubelet[2403]: E1027 08:24:25.135601 2403 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.224.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999.9.9-k-8ed45c9b51&limit=500&resourceVersion=0\": dial tcp 143.198.224.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 27 08:24:25.139222 kubelet[2403]: E1027 08:24:25.139182 2403 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.224.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.224.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 27 08:24:25.139531 kubelet[2403]: I1027 08:24:25.139506 2403 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 27 08:24:25.140185 kubelet[2403]: I1027 08:24:25.140166 2403 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 27 08:24:25.141162 kubelet[2403]: W1027 08:24:25.140985 2403 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 27 08:24:25.148825 kubelet[2403]: I1027 08:24:25.148793 2403 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 27 08:24:25.148956 kubelet[2403]: I1027 08:24:25.148876 2403 server.go:1289] "Started kubelet"
Oct 27 08:24:25.152444 kubelet[2403]: I1027 08:24:25.151880 2403 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 27 08:24:25.152444 kubelet[2403]: I1027 08:24:25.152230 2403 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 27 08:24:25.152444 kubelet[2403]: I1027 08:24:25.152433 2403 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 27 08:24:25.160458 kubelet[2403]: I1027 08:24:25.160407 2403 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 27 08:24:25.162042 kubelet[2403]: I1027 08:24:25.162018 2403 server.go:317] "Adding debug handlers to kubelet server"
Oct 27 08:24:25.165995 kubelet[2403]: I1027 08:24:25.165968 2403 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 27 08:24:25.167946 kubelet[2403]: I1027 08:24:25.167912 2403 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 27 08:24:25.168723 kubelet[2403]: E1027 08:24:25.168677 2403 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999.9.9-k-8ed45c9b51\" not found"
Oct 27 08:24:25.169572 kubelet[2403]: E1027 08:24:25.165837 2403 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.224.48:6443/api/v1/namespaces/default/events\": dial tcp 143.198.224.48:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-9999.9.9-k-8ed45c9b51.18724b8b1f0dd7ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-9999.9.9-k-8ed45c9b51,UID:ci-9999.9.9-k-8ed45c9b51,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-9999.9.9-k-8ed45c9b51,},FirstTimestamp:2025-10-27 08:24:25.148823534 +0000 UTC m=+0.629758516,LastTimestamp:2025-10-27 08:24:25.148823534 +0000 UTC m=+0.629758516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999.9.9-k-8ed45c9b51,}"
Oct 27 08:24:25.171580 kubelet[2403]: I1027 08:24:25.171308 2403 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 27 08:24:25.172053 kubelet[2403]: E1027 08:24:25.172018 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.224.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999.9.9-k-8ed45c9b51?timeout=10s\": dial tcp 143.198.224.48:6443: connect: connection refused" interval="200ms"
Oct 27 08:24:25.173035 kubelet[2403]: I1027 08:24:25.172899 2403 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 27 08:24:25.173235 kubelet[2403]: I1027 08:24:25.173223 2403 reconciler.go:26] "Reconciler: start to sync state"
Oct 27 08:24:25.175022 kubelet[2403]: E1027 08:24:25.174492 2403 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.224.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.224.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 27 08:24:25.178415 kubelet[2403]: I1027 08:24:25.178390 2403 factory.go:223] Registration of the containerd container factory successfully
Oct 27 08:24:25.178563 kubelet[2403]: I1027 08:24:25.178552 2403 factory.go:223] Registration of the systemd container factory successfully
Oct 27 08:24:25.199289 kubelet[2403]: I1027 08:24:25.199230 2403 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 27 08:24:25.200914 kubelet[2403]: I1027 08:24:25.200884 2403 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 27 08:24:25.200914 kubelet[2403]: I1027 08:24:25.200903 2403 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 27 08:24:25.200914 kubelet[2403]: I1027 08:24:25.200922 2403 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 08:24:25.201199 kubelet[2403]: I1027 08:24:25.201183 2403 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 27 08:24:25.201249 kubelet[2403]: I1027 08:24:25.201204 2403 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 27 08:24:25.201249 kubelet[2403]: I1027 08:24:25.201226 2403 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 27 08:24:25.201249 kubelet[2403]: I1027 08:24:25.201240 2403 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 27 08:24:25.201414 kubelet[2403]: E1027 08:24:25.201329 2403 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 27 08:24:25.202372 kubelet[2403]: I1027 08:24:25.202350 2403 policy_none.go:49] "None policy: Start"
Oct 27 08:24:25.202448 kubelet[2403]: I1027 08:24:25.202382 2403 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 27 08:24:25.202448 kubelet[2403]: I1027 08:24:25.202399 2403 state_mem.go:35] "Initializing new in-memory state store"
Oct 27 08:24:25.209067 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 27 08:24:25.212946 kubelet[2403]: E1027 08:24:25.212838 2403 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.224.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.224.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 27 08:24:25.220559 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 27 08:24:25.223859 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 27 08:24:25.242598 kubelet[2403]: E1027 08:24:25.242566 2403 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 27 08:24:25.243415 kubelet[2403]: I1027 08:24:25.243351 2403 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 27 08:24:25.243591 kubelet[2403]: I1027 08:24:25.243497 2403 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 27 08:24:25.244446 kubelet[2403]: I1027 08:24:25.244427 2403 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 27 08:24:25.246085 kubelet[2403]: E1027 08:24:25.246069 2403 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 27 08:24:25.246245 kubelet[2403]: E1027 08:24:25.246199 2403 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-9999.9.9-k-8ed45c9b51\" not found"
Oct 27 08:24:25.314908 systemd[1]: Created slice kubepods-burstable-pod4adc42a34e1ea9b384df441b3c4e1776.slice - libcontainer container kubepods-burstable-pod4adc42a34e1ea9b384df441b3c4e1776.slice.
Oct 27 08:24:25.333969 kubelet[2403]: E1027 08:24:25.333055 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-k-8ed45c9b51\" not found" node="ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.336623 systemd[1]: Created slice kubepods-burstable-pode6b0b77f0401d1cf0c030b0855fc60ae.slice - libcontainer container kubepods-burstable-pode6b0b77f0401d1cf0c030b0855fc60ae.slice.
Oct 27 08:24:25.340359 kubelet[2403]: E1027 08:24:25.340335 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-k-8ed45c9b51\" not found" node="ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.341430 systemd[1]: Created slice kubepods-burstable-pod658141d1b7ffe3fafa6b8461dea3aac4.slice - libcontainer container kubepods-burstable-pod658141d1b7ffe3fafa6b8461dea3aac4.slice.
Oct 27 08:24:25.344762 kubelet[2403]: E1027 08:24:25.344733 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-k-8ed45c9b51\" not found" node="ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.347652 kubelet[2403]: I1027 08:24:25.347547 2403 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.348161 kubelet[2403]: E1027 08:24:25.348134 2403 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.224.48:6443/api/v1/nodes\": dial tcp 143.198.224.48:6443: connect: connection refused" node="ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.372783 kubelet[2403]: E1027 08:24:25.372726 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.224.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999.9.9-k-8ed45c9b51?timeout=10s\": dial tcp 143.198.224.48:6443: connect: connection refused" interval="400ms"
Oct 27 08:24:25.375294 kubelet[2403]: I1027 08:24:25.375126 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6b0b77f0401d1cf0c030b0855fc60ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999.9.9-k-8ed45c9b51\" (UID: \"e6b0b77f0401d1cf0c030b0855fc60ae\") " pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.375294 kubelet[2403]: I1027 08:24:25.375171 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/658141d1b7ffe3fafa6b8461dea3aac4-ca-certs\") pod \"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" (UID: \"658141d1b7ffe3fafa6b8461dea3aac4\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.375294 kubelet[2403]: I1027 08:24:25.375190 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/658141d1b7ffe3fafa6b8461dea3aac4-flexvolume-dir\") pod \"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" (UID: \"658141d1b7ffe3fafa6b8461dea3aac4\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.375294 kubelet[2403]: I1027 08:24:25.375205 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/658141d1b7ffe3fafa6b8461dea3aac4-kubeconfig\") pod \"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" (UID: \"658141d1b7ffe3fafa6b8461dea3aac4\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.375294 kubelet[2403]: I1027 08:24:25.375221 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/658141d1b7ffe3fafa6b8461dea3aac4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" (UID: \"658141d1b7ffe3fafa6b8461dea3aac4\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.375589 kubelet[2403]: I1027 08:24:25.375241 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4adc42a34e1ea9b384df441b3c4e1776-kubeconfig\") pod \"kube-scheduler-ci-9999.9.9-k-8ed45c9b51\" (UID: \"4adc42a34e1ea9b384df441b3c4e1776\") " pod="kube-system/kube-scheduler-ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.375589 kubelet[2403]: I1027 08:24:25.375283 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6b0b77f0401d1cf0c030b0855fc60ae-ca-certs\") pod \"kube-apiserver-ci-9999.9.9-k-8ed45c9b51\" (UID: \"e6b0b77f0401d1cf0c030b0855fc60ae\") " pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.375589 kubelet[2403]: I1027 08:24:25.375299 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6b0b77f0401d1cf0c030b0855fc60ae-k8s-certs\") pod \"kube-apiserver-ci-9999.9.9-k-8ed45c9b51\" (UID: \"e6b0b77f0401d1cf0c030b0855fc60ae\") " pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.375589 kubelet[2403]: I1027 08:24:25.375322 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/658141d1b7ffe3fafa6b8461dea3aac4-k8s-certs\") pod \"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" (UID: \"658141d1b7ffe3fafa6b8461dea3aac4\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.549421 kubelet[2403]: I1027 08:24:25.549374 2403 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.549936 kubelet[2403]: E1027 08:24:25.549900 2403 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.224.48:6443/api/v1/nodes\": dial tcp 143.198.224.48:6443: connect: connection refused" node="ci-9999.9.9-k-8ed45c9b51"
Oct 27 08:24:25.635143 kubelet[2403]: E1027 08:24:25.635094 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 27 08:24:25.638055 containerd[1614]: time="2025-10-27T08:24:25.637745610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999.9.9-k-8ed45c9b51,Uid:4adc42a34e1ea9b384df441b3c4e1776,Namespace:kube-system,Attempt:0,}"
Oct 27 08:24:25.641279 kubelet[2403]: E1027 08:24:25.641191 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 27 08:24:25.646610 containerd[1614]: time="2025-10-27T08:24:25.646551219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999.9.9-k-8ed45c9b51,Uid:e6b0b77f0401d1cf0c030b0855fc60ae,Namespace:kube-system,Attempt:0,}"
Oct 27 08:24:25.649096 kubelet[2403]: E1027 08:24:25.648812 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 27 08:24:25.649495 containerd[1614]: time="2025-10-27T08:24:25.649452899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999.9.9-k-8ed45c9b51,Uid:658141d1b7ffe3fafa6b8461dea3aac4,Namespace:kube-system,Attempt:0,}"
Oct 27 08:24:25.771327 containerd[1614]: time="2025-10-27T08:24:25.769389189Z" level=info msg="connecting to shim e4d46f12190f7989cf721031d7e2c2b8e847d02561ccc3d8c2a125962b510ff2" address="unix:///run/containerd/s/f85e2b0b014eb4227f6676fe7ef010c3836fc1bb2d61b235919136c1bfd96bb0" namespace=k8s.io protocol=ttrpc version=3
Oct 27 08:24:25.773380 kubelet[2403]: E1027 08:24:25.773321 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.224.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999.9.9-k-8ed45c9b51?timeout=10s\": dial tcp 143.198.224.48:6443: connect: connection refused" interval="800ms"
Oct 27 08:24:25.780344 containerd[1614]: time="2025-10-27T08:24:25.780289061Z" level=info msg="connecting to shim 2d7690c7a9868129317d988fdfa856da0946df64e654e330ad1ad045cee59b75" address="unix:///run/containerd/s/e58b8ca6eea42f721a8d43d51a2b86e74cc7fb7e3ab6f382d0ea586b01a5f939" namespace=k8s.io protocol=ttrpc version=3
Oct 27 08:24:25.781350 containerd[1614]: time="2025-10-27T08:24:25.781239709Z" level=info msg="connecting to shim ef152eeed104b4d757ceed159ad318e1e17cfc56b679a59442416a6ebd2721c4" address="unix:///run/containerd/s/63f5481a85570997e5b6ccffebb12b90a129fb84c019a0051cd8a80b3fa43444" namespace=k8s.io protocol=ttrpc version=3
Oct 27 08:24:25.899372 systemd[1]: Started cri-containerd-2d7690c7a9868129317d988fdfa856da0946df64e654e330ad1ad045cee59b75.scope - libcontainer container 2d7690c7a9868129317d988fdfa856da0946df64e654e330ad1ad045cee59b75.
Oct 27 08:24:25.901262 systemd[1]: Started cri-containerd-e4d46f12190f7989cf721031d7e2c2b8e847d02561ccc3d8c2a125962b510ff2.scope - libcontainer container e4d46f12190f7989cf721031d7e2c2b8e847d02561ccc3d8c2a125962b510ff2.
Oct 27 08:24:25.904245 systemd[1]: Started cri-containerd-ef152eeed104b4d757ceed159ad318e1e17cfc56b679a59442416a6ebd2721c4.scope - libcontainer container ef152eeed104b4d757ceed159ad318e1e17cfc56b679a59442416a6ebd2721c4.
Oct 27 08:24:25.951343 kubelet[2403]: I1027 08:24:25.951160 2403 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:25.953174 kubelet[2403]: E1027 08:24:25.953111 2403 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.224.48:6443/api/v1/nodes\": dial tcp 143.198.224.48:6443: connect: connection refused" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:26.011519 kubelet[2403]: E1027 08:24:26.011460 2403 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.224.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.224.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 08:24:26.011988 containerd[1614]: time="2025-10-27T08:24:26.011940725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999.9.9-k-8ed45c9b51,Uid:658141d1b7ffe3fafa6b8461dea3aac4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef152eeed104b4d757ceed159ad318e1e17cfc56b679a59442416a6ebd2721c4\"" Oct 27 08:24:26.014527 kubelet[2403]: E1027 08:24:26.014393 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:26.024855 containerd[1614]: time="2025-10-27T08:24:26.024789980Z" level=info msg="CreateContainer within sandbox \"ef152eeed104b4d757ceed159ad318e1e17cfc56b679a59442416a6ebd2721c4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 27 08:24:26.026286 containerd[1614]: time="2025-10-27T08:24:26.026246246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999.9.9-k-8ed45c9b51,Uid:e6b0b77f0401d1cf0c030b0855fc60ae,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"2d7690c7a9868129317d988fdfa856da0946df64e654e330ad1ad045cee59b75\"" Oct 27 08:24:26.027404 kubelet[2403]: E1027 08:24:26.027376 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:26.033660 containerd[1614]: time="2025-10-27T08:24:26.033617334Z" level=info msg="CreateContainer within sandbox \"2d7690c7a9868129317d988fdfa856da0946df64e654e330ad1ad045cee59b75\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 27 08:24:26.040231 containerd[1614]: time="2025-10-27T08:24:26.039955709Z" level=info msg="Container 8888dff06e4af728786c52f8b9ced9fd5d5b109d2f7fef9cdf26e8abf64dc3dd: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:24:26.043359 containerd[1614]: time="2025-10-27T08:24:26.043311158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999.9.9-k-8ed45c9b51,Uid:4adc42a34e1ea9b384df441b3c4e1776,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4d46f12190f7989cf721031d7e2c2b8e847d02561ccc3d8c2a125962b510ff2\"" Oct 27 08:24:26.047365 containerd[1614]: time="2025-10-27T08:24:26.047306697Z" level=info msg="Container 5b44decb4c554938acca6f29db273fa79d341527632989f131359a2bb766f73a: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:24:26.047646 kubelet[2403]: E1027 08:24:26.047591 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:26.052755 containerd[1614]: time="2025-10-27T08:24:26.052706190Z" level=info msg="CreateContainer within sandbox \"e4d46f12190f7989cf721031d7e2c2b8e847d02561ccc3d8c2a125962b510ff2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 27 08:24:26.057309 containerd[1614]: time="2025-10-27T08:24:26.057276956Z" level=info msg="CreateContainer within sandbox 
\"2d7690c7a9868129317d988fdfa856da0946df64e654e330ad1ad045cee59b75\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5b44decb4c554938acca6f29db273fa79d341527632989f131359a2bb766f73a\"" Oct 27 08:24:26.058008 containerd[1614]: time="2025-10-27T08:24:26.057968366Z" level=info msg="CreateContainer within sandbox \"ef152eeed104b4d757ceed159ad318e1e17cfc56b679a59442416a6ebd2721c4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8888dff06e4af728786c52f8b9ced9fd5d5b109d2f7fef9cdf26e8abf64dc3dd\"" Oct 27 08:24:26.058704 containerd[1614]: time="2025-10-27T08:24:26.058680653Z" level=info msg="StartContainer for \"5b44decb4c554938acca6f29db273fa79d341527632989f131359a2bb766f73a\"" Oct 27 08:24:26.059232 containerd[1614]: time="2025-10-27T08:24:26.059178272Z" level=info msg="StartContainer for \"8888dff06e4af728786c52f8b9ced9fd5d5b109d2f7fef9cdf26e8abf64dc3dd\"" Oct 27 08:24:26.060189 containerd[1614]: time="2025-10-27T08:24:26.060165846Z" level=info msg="connecting to shim 8888dff06e4af728786c52f8b9ced9fd5d5b109d2f7fef9cdf26e8abf64dc3dd" address="unix:///run/containerd/s/63f5481a85570997e5b6ccffebb12b90a129fb84c019a0051cd8a80b3fa43444" protocol=ttrpc version=3 Oct 27 08:24:26.061546 containerd[1614]: time="2025-10-27T08:24:26.061058284Z" level=info msg="connecting to shim 5b44decb4c554938acca6f29db273fa79d341527632989f131359a2bb766f73a" address="unix:///run/containerd/s/e58b8ca6eea42f721a8d43d51a2b86e74cc7fb7e3ab6f382d0ea586b01a5f939" protocol=ttrpc version=3 Oct 27 08:24:26.067494 containerd[1614]: time="2025-10-27T08:24:26.067448165Z" level=info msg="Container 35afba64198b6f80fe09599ecc3051b2cceea5800c6850c67c8b4974d0919723: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:24:26.078531 containerd[1614]: time="2025-10-27T08:24:26.078477990Z" level=info msg="CreateContainer within sandbox \"e4d46f12190f7989cf721031d7e2c2b8e847d02561ccc3d8c2a125962b510ff2\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"35afba64198b6f80fe09599ecc3051b2cceea5800c6850c67c8b4974d0919723\"" Oct 27 08:24:26.079963 containerd[1614]: time="2025-10-27T08:24:26.079731907Z" level=info msg="StartContainer for \"35afba64198b6f80fe09599ecc3051b2cceea5800c6850c67c8b4974d0919723\"" Oct 27 08:24:26.085746 containerd[1614]: time="2025-10-27T08:24:26.085696244Z" level=info msg="connecting to shim 35afba64198b6f80fe09599ecc3051b2cceea5800c6850c67c8b4974d0919723" address="unix:///run/containerd/s/f85e2b0b014eb4227f6676fe7ef010c3836fc1bb2d61b235919136c1bfd96bb0" protocol=ttrpc version=3 Oct 27 08:24:26.096587 systemd[1]: Started cri-containerd-5b44decb4c554938acca6f29db273fa79d341527632989f131359a2bb766f73a.scope - libcontainer container 5b44decb4c554938acca6f29db273fa79d341527632989f131359a2bb766f73a. Oct 27 08:24:26.098900 systemd[1]: Started cri-containerd-8888dff06e4af728786c52f8b9ced9fd5d5b109d2f7fef9cdf26e8abf64dc3dd.scope - libcontainer container 8888dff06e4af728786c52f8b9ced9fd5d5b109d2f7fef9cdf26e8abf64dc3dd. Oct 27 08:24:26.124467 kubelet[2403]: E1027 08:24:26.124394 2403 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.224.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.224.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 27 08:24:26.143168 systemd[1]: Started cri-containerd-35afba64198b6f80fe09599ecc3051b2cceea5800c6850c67c8b4974d0919723.scope - libcontainer container 35afba64198b6f80fe09599ecc3051b2cceea5800c6850c67c8b4974d0919723. 
Oct 27 08:24:26.225089 containerd[1614]: time="2025-10-27T08:24:26.221226061Z" level=info msg="StartContainer for \"8888dff06e4af728786c52f8b9ced9fd5d5b109d2f7fef9cdf26e8abf64dc3dd\" returns successfully" Oct 27 08:24:26.233884 containerd[1614]: time="2025-10-27T08:24:26.233832066Z" level=info msg="StartContainer for \"5b44decb4c554938acca6f29db273fa79d341527632989f131359a2bb766f73a\" returns successfully" Oct 27 08:24:26.284187 kubelet[2403]: E1027 08:24:26.283332 2403 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.224.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999.9.9-k-8ed45c9b51&limit=500&resourceVersion=0\": dial tcp 143.198.224.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 08:24:26.285531 containerd[1614]: time="2025-10-27T08:24:26.285375036Z" level=info msg="StartContainer for \"35afba64198b6f80fe09599ecc3051b2cceea5800c6850c67c8b4974d0919723\" returns successfully" Oct 27 08:24:26.754218 kubelet[2403]: I1027 08:24:26.754174 2403 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:27.243908 kubelet[2403]: E1027 08:24:27.243459 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-k-8ed45c9b51\" not found" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:27.243908 kubelet[2403]: E1027 08:24:27.243617 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:27.248138 kubelet[2403]: E1027 08:24:27.247298 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-k-8ed45c9b51\" not found" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:27.248813 kubelet[2403]: E1027 08:24:27.248447 2403 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-k-8ed45c9b51\" not found" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:27.248813 kubelet[2403]: E1027 08:24:27.248729 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:27.248813 kubelet[2403]: E1027 08:24:27.248735 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:28.250623 kubelet[2403]: E1027 08:24:28.250587 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-k-8ed45c9b51\" not found" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:28.253208 kubelet[2403]: E1027 08:24:28.251223 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:28.253865 kubelet[2403]: E1027 08:24:28.253834 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999.9.9-k-8ed45c9b51\" not found" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:28.254155 kubelet[2403]: E1027 08:24:28.254139 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:28.652254 kubelet[2403]: I1027 08:24:28.652192 2403 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:28.652254 kubelet[2403]: E1027 08:24:28.652259 2403 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node 
\"ci-9999.9.9-k-8ed45c9b51\": node \"ci-9999.9.9-k-8ed45c9b51\" not found" Oct 27 08:24:28.678330 kubelet[2403]: E1027 08:24:28.678286 2403 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999.9.9-k-8ed45c9b51\" not found" Oct 27 08:24:28.772291 kubelet[2403]: I1027 08:24:28.771564 2403 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:28.784931 kubelet[2403]: E1027 08:24:28.784887 2403 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999.9.9-k-8ed45c9b51\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:28.785394 kubelet[2403]: I1027 08:24:28.785139 2403 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:28.788652 kubelet[2403]: E1027 08:24:28.788250 2403 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:28.788652 kubelet[2403]: I1027 08:24:28.788284 2403 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:28.792010 kubelet[2403]: E1027 08:24:28.791971 2403 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999.9.9-k-8ed45c9b51\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:29.138211 kubelet[2403]: I1027 08:24:29.138107 2403 apiserver.go:52] "Watching apiserver" Oct 27 08:24:29.173362 kubelet[2403]: I1027 08:24:29.173294 2403 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 
08:24:29.250969 kubelet[2403]: I1027 08:24:29.250567 2403 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:29.250969 kubelet[2403]: I1027 08:24:29.250726 2403 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:29.253142 kubelet[2403]: E1027 08:24:29.253087 2403 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999.9.9-k-8ed45c9b51\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:29.253385 kubelet[2403]: E1027 08:24:29.253330 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:29.254686 kubelet[2403]: E1027 08:24:29.254623 2403 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999.9.9-k-8ed45c9b51\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:29.254879 kubelet[2403]: E1027 08:24:29.254845 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:30.997440 systemd[1]: Reload requested from client PID 2680 ('systemctl') (unit session-7.scope)... Oct 27 08:24:30.997461 systemd[1]: Reloading... Oct 27 08:24:31.117168 zram_generator::config[2728]: No configuration found. Oct 27 08:24:31.413158 systemd[1]: Reloading finished in 415 ms. 
Oct 27 08:24:31.420989 kubelet[2403]: I1027 08:24:31.420921 2403 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:31.437629 kubelet[2403]: I1027 08:24:31.437490 2403 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 27 08:24:31.438371 kubelet[2403]: E1027 08:24:31.438323 2403 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:31.448474 kubelet[2403]: I1027 08:24:31.448392 2403 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:24:31.450525 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:24:31.460646 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 08:24:31.460910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:24:31.460983 systemd[1]: kubelet.service: Consumed 1.074s CPU time, 128.2M memory peak. Oct 27 08:24:31.464828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:24:31.653379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:24:31.670439 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 08:24:31.743615 kubelet[2775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 08:24:31.744236 kubelet[2775]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Oct 27 08:24:31.744236 kubelet[2775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 08:24:31.744706 kubelet[2775]: I1027 08:24:31.744582 2775 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 08:24:31.761511 kubelet[2775]: I1027 08:24:31.761451 2775 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 27 08:24:31.762311 kubelet[2775]: I1027 08:24:31.761635 2775 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 08:24:31.762311 kubelet[2775]: I1027 08:24:31.761996 2775 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 08:24:31.763844 kubelet[2775]: I1027 08:24:31.763811 2775 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 27 08:24:31.767717 kubelet[2775]: I1027 08:24:31.767404 2775 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:24:31.776466 kubelet[2775]: I1027 08:24:31.776433 2775 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 08:24:31.786542 kubelet[2775]: I1027 08:24:31.786488 2775 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 27 08:24:31.787997 kubelet[2775]: I1027 08:24:31.787221 2775 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 08:24:31.787997 kubelet[2775]: I1027 08:24:31.787289 2775 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999.9.9-k-8ed45c9b51","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 08:24:31.787997 kubelet[2775]: I1027 08:24:31.787722 2775 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 
08:24:31.787997 kubelet[2775]: I1027 08:24:31.787738 2775 container_manager_linux.go:303] "Creating device plugin manager" Oct 27 08:24:31.787997 kubelet[2775]: I1027 08:24:31.787849 2775 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:24:31.788717 kubelet[2775]: I1027 08:24:31.788689 2775 kubelet.go:480] "Attempting to sync node with API server" Oct 27 08:24:31.788857 kubelet[2775]: I1027 08:24:31.788842 2775 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 08:24:31.789012 kubelet[2775]: I1027 08:24:31.789001 2775 kubelet.go:386] "Adding apiserver pod source" Oct 27 08:24:31.789078 kubelet[2775]: I1027 08:24:31.789071 2775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 08:24:31.794389 kubelet[2775]: I1027 08:24:31.794328 2775 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 27 08:24:31.796266 kubelet[2775]: I1027 08:24:31.795442 2775 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 08:24:31.802617 kubelet[2775]: I1027 08:24:31.802575 2775 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 27 08:24:31.802756 kubelet[2775]: I1027 08:24:31.802649 2775 server.go:1289] "Started kubelet" Oct 27 08:24:31.809820 kubelet[2775]: I1027 08:24:31.809304 2775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 08:24:31.810516 kubelet[2775]: I1027 08:24:31.810483 2775 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 08:24:31.810724 kubelet[2775]: I1027 08:24:31.810576 2775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 08:24:31.819204 kubelet[2775]: I1027 08:24:31.818227 2775 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 08:24:31.827958 kubelet[2775]: I1027 08:24:31.826777 2775 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 08:24:31.831843 kubelet[2775]: I1027 08:24:31.831566 2775 server.go:317] "Adding debug handlers to kubelet server" Oct 27 08:24:31.834522 kubelet[2775]: I1027 08:24:31.834459 2775 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 27 08:24:31.835586 kubelet[2775]: I1027 08:24:31.835547 2775 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 27 08:24:31.835758 kubelet[2775]: I1027 08:24:31.835739 2775 reconciler.go:26] "Reconciler: start to sync state" Oct 27 08:24:31.837469 kubelet[2775]: I1027 08:24:31.837076 2775 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 08:24:31.839702 kubelet[2775]: E1027 08:24:31.839491 2775 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 08:24:31.844222 kubelet[2775]: I1027 08:24:31.844072 2775 factory.go:223] Registration of the containerd container factory successfully Oct 27 08:24:31.844222 kubelet[2775]: I1027 08:24:31.844097 2775 factory.go:223] Registration of the systemd container factory successfully Oct 27 08:24:31.862914 kubelet[2775]: I1027 08:24:31.862725 2775 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 27 08:24:31.866962 kubelet[2775]: I1027 08:24:31.866441 2775 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 27 08:24:31.866962 kubelet[2775]: I1027 08:24:31.866480 2775 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 27 08:24:31.866962 kubelet[2775]: I1027 08:24:31.866526 2775 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 08:24:31.866962 kubelet[2775]: I1027 08:24:31.866537 2775 kubelet.go:2436] "Starting kubelet main sync loop" Oct 27 08:24:31.866962 kubelet[2775]: E1027 08:24:31.866608 2775 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 08:24:31.913123 kubelet[2775]: I1027 08:24:31.913085 2775 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 08:24:31.913306 kubelet[2775]: I1027 08:24:31.913294 2775 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 08:24:31.913476 kubelet[2775]: I1027 08:24:31.913363 2775 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:24:31.913845 kubelet[2775]: I1027 08:24:31.913824 2775 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 27 08:24:31.913958 kubelet[2775]: I1027 08:24:31.913936 2775 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 27 08:24:31.914012 kubelet[2775]: I1027 08:24:31.914006 2775 policy_none.go:49] "None policy: Start" Oct 27 08:24:31.914062 kubelet[2775]: I1027 08:24:31.914056 2775 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 27 08:24:31.914106 kubelet[2775]: I1027 08:24:31.914100 2775 state_mem.go:35] "Initializing new in-memory state store" Oct 27 08:24:31.914300 kubelet[2775]: I1027 08:24:31.914284 2775 state_mem.go:75] "Updated machine memory state" Oct 27 08:24:31.919821 kubelet[2775]: E1027 08:24:31.919749 2775 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 08:24:31.920798 kubelet[2775]: I1027 
08:24:31.919977 2775 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 08:24:31.920798 kubelet[2775]: I1027 08:24:31.920001 2775 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 08:24:31.920798 kubelet[2775]: I1027 08:24:31.920599 2775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 08:24:31.925837 kubelet[2775]: E1027 08:24:31.924045 2775 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 27 08:24:31.968498 kubelet[2775]: I1027 08:24:31.968444 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:31.969009 kubelet[2775]: I1027 08:24:31.968980 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:31.970786 kubelet[2775]: I1027 08:24:31.970438 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:31.980726 kubelet[2775]: I1027 08:24:31.980633 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 27 08:24:31.982737 kubelet[2775]: I1027 08:24:31.982510 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 27 08:24:31.984842 kubelet[2775]: I1027 08:24:31.984813 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 27 08:24:31.984985 kubelet[2775]: E1027 08:24:31.984898 2775 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-9999.9.9-k-8ed45c9b51\" already exists" pod="kube-system/kube-scheduler-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.028993 kubelet[2775]: I1027 08:24:32.028942 2775 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.036256 kubelet[2775]: I1027 08:24:32.036172 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/658141d1b7ffe3fafa6b8461dea3aac4-kubeconfig\") pod \"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" (UID: \"658141d1b7ffe3fafa6b8461dea3aac4\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.036256 kubelet[2775]: I1027 08:24:32.036237 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/658141d1b7ffe3fafa6b8461dea3aac4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" (UID: \"658141d1b7ffe3fafa6b8461dea3aac4\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.036256 kubelet[2775]: I1027 08:24:32.036270 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4adc42a34e1ea9b384df441b3c4e1776-kubeconfig\") pod \"kube-scheduler-ci-9999.9.9-k-8ed45c9b51\" (UID: \"4adc42a34e1ea9b384df441b3c4e1776\") " pod="kube-system/kube-scheduler-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.036476 kubelet[2775]: I1027 08:24:32.036294 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6b0b77f0401d1cf0c030b0855fc60ae-ca-certs\") pod \"kube-apiserver-ci-9999.9.9-k-8ed45c9b51\" (UID: \"e6b0b77f0401d1cf0c030b0855fc60ae\") " pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51" Oct 27 
08:24:32.036476 kubelet[2775]: I1027 08:24:32.036314 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6b0b77f0401d1cf0c030b0855fc60ae-k8s-certs\") pod \"kube-apiserver-ci-9999.9.9-k-8ed45c9b51\" (UID: \"e6b0b77f0401d1cf0c030b0855fc60ae\") " pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.036476 kubelet[2775]: I1027 08:24:32.036356 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6b0b77f0401d1cf0c030b0855fc60ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999.9.9-k-8ed45c9b51\" (UID: \"e6b0b77f0401d1cf0c030b0855fc60ae\") " pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.036476 kubelet[2775]: I1027 08:24:32.036381 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/658141d1b7ffe3fafa6b8461dea3aac4-ca-certs\") pod \"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" (UID: \"658141d1b7ffe3fafa6b8461dea3aac4\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.036476 kubelet[2775]: I1027 08:24:32.036407 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/658141d1b7ffe3fafa6b8461dea3aac4-flexvolume-dir\") pod \"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" (UID: \"658141d1b7ffe3fafa6b8461dea3aac4\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.036600 kubelet[2775]: I1027 08:24:32.036431 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/658141d1b7ffe3fafa6b8461dea3aac4-k8s-certs\") pod 
\"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" (UID: \"658141d1b7ffe3fafa6b8461dea3aac4\") " pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.047066 kubelet[2775]: I1027 08:24:32.046826 2775 kubelet_node_status.go:124] "Node was previously registered" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.047066 kubelet[2775]: I1027 08:24:32.047081 2775 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.281946 kubelet[2775]: E1027 08:24:32.281418 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:32.283294 kubelet[2775]: E1027 08:24:32.283139 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:32.286643 kubelet[2775]: E1027 08:24:32.285758 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:32.791405 kubelet[2775]: I1027 08:24:32.791299 2775 apiserver.go:52] "Watching apiserver" Oct 27 08:24:32.836746 kubelet[2775]: I1027 08:24:32.836675 2775 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 08:24:32.895962 kubelet[2775]: E1027 08:24:32.895916 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:32.901172 kubelet[2775]: I1027 08:24:32.899768 2775 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.901894 kubelet[2775]: I1027 08:24:32.901641 2775 kubelet.go:3309] 
"Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.924147 kubelet[2775]: I1027 08:24:32.922273 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 27 08:24:32.924147 kubelet[2775]: E1027 08:24:32.922367 2775 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999.9.9-k-8ed45c9b51\" already exists" pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.924147 kubelet[2775]: E1027 08:24:32.922627 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:32.939228 kubelet[2775]: I1027 08:24:32.939178 2775 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 27 08:24:32.939397 kubelet[2775]: E1027 08:24:32.939247 2775 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-9999.9.9-k-8ed45c9b51\" already exists" pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51" Oct 27 08:24:32.941135 kubelet[2775]: E1027 08:24:32.939744 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:32.962770 kubelet[2775]: I1027 08:24:32.962696 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-9999.9.9-k-8ed45c9b51" podStartSLOduration=1.962668625 podStartE2EDuration="1.962668625s" podCreationTimestamp="2025-10-27 08:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-10-27 08:24:32.961321897 +0000 UTC m=+1.281272000" watchObservedRunningTime="2025-10-27 08:24:32.962668625 +0000 UTC m=+1.282618728" Oct 27 08:24:32.996290 kubelet[2775]: I1027 08:24:32.996224 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-9999.9.9-k-8ed45c9b51" podStartSLOduration=1.996204994 podStartE2EDuration="1.996204994s" podCreationTimestamp="2025-10-27 08:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:24:32.974820632 +0000 UTC m=+1.294770736" watchObservedRunningTime="2025-10-27 08:24:32.996204994 +0000 UTC m=+1.316155091" Oct 27 08:24:33.012976 kubelet[2775]: I1027 08:24:33.012525 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-9999.9.9-k-8ed45c9b51" podStartSLOduration=2.012507521 podStartE2EDuration="2.012507521s" podCreationTimestamp="2025-10-27 08:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:24:32.997930597 +0000 UTC m=+1.317880701" watchObservedRunningTime="2025-10-27 08:24:33.012507521 +0000 UTC m=+1.332457624" Oct 27 08:24:33.898535 kubelet[2775]: E1027 08:24:33.898470 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:33.898984 kubelet[2775]: E1027 08:24:33.898959 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:33.899221 kubelet[2775]: E1027 08:24:33.899143 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:36.608730 systemd-resolved[1283]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Oct 27 08:24:37.278033 systemd-timesyncd[1460]: Contacted time server 72.87.88.202:123 (2.flatcar.pool.ntp.org). Oct 27 08:24:37.278054 systemd-resolved[1283]: Clock change detected. Flushing caches. Oct 27 08:24:37.278100 systemd-timesyncd[1460]: Initial clock synchronization to Mon 2025-10-27 08:24:37.277610 UTC. Oct 27 08:24:37.902244 kubelet[2775]: I1027 08:24:37.902162 2775 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 27 08:24:37.903245 containerd[1614]: time="2025-10-27T08:24:37.903141300Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 27 08:24:37.903590 kubelet[2775]: I1027 08:24:37.903384 2775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 27 08:24:38.914617 systemd[1]: Created slice kubepods-besteffort-podb68d396b_3163_410c_9755_5c91f03fd063.slice - libcontainer container kubepods-besteffort-podb68d396b_3163_410c_9755_5c91f03fd063.slice. 
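The recurring kubelet `dns.go:153` "Nameserver limits exceeded" entries above come from the node's resolv.conf listing more nameservers than the glibc limit of three (MAXNS); kubelet applies only the first three and logs the rest as omitted. Note the applied line even contains a duplicate (67.207.67.2 appears twice). The sketch below is a hypothetical illustration of that truncation behavior, not kubelet's actual Go implementation; the file content and function name are invented for the example.

```python
# Hypothetical sketch of the truncation kubelet performs when a node's
# resolv.conf lists more than the glibc limit of 3 nameservers (MAXNS).
# The example resolv.conf content below is illustrative only.

MAX_NAMESERVERS = 3  # glibc MAXNS; entries beyond this are silently unused


def applied_nameservers(resolv_conf_text: str) -> tuple[list[str], bool]:
    """Return (nameservers actually applied, whether any were omitted)."""
    servers = []
    for line in resolv_conf_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS], len(servers) > MAX_NAMESERVERS


conf = """nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 8.8.8.8
"""
applied, truncated = applied_nameservers(conf)
print(applied, truncated)
# → ['67.207.67.2', '67.207.67.3', '67.207.67.2'] True
```

Deduplicating the nameserver list on the node (or trimming it to three entries) would silence these warnings; kubelet itself only truncates and warns.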
Oct 27 08:24:38.967178 kubelet[2775]: I1027 08:24:38.967118 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b68d396b-3163-410c-9755-5c91f03fd063-kube-proxy\") pod \"kube-proxy-5dhvv\" (UID: \"b68d396b-3163-410c-9755-5c91f03fd063\") " pod="kube-system/kube-proxy-5dhvv" Oct 27 08:24:38.967870 kubelet[2775]: I1027 08:24:38.967250 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b68d396b-3163-410c-9755-5c91f03fd063-xtables-lock\") pod \"kube-proxy-5dhvv\" (UID: \"b68d396b-3163-410c-9755-5c91f03fd063\") " pod="kube-system/kube-proxy-5dhvv" Oct 27 08:24:38.967870 kubelet[2775]: I1027 08:24:38.967280 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b68d396b-3163-410c-9755-5c91f03fd063-lib-modules\") pod \"kube-proxy-5dhvv\" (UID: \"b68d396b-3163-410c-9755-5c91f03fd063\") " pod="kube-system/kube-proxy-5dhvv" Oct 27 08:24:38.967870 kubelet[2775]: I1027 08:24:38.967306 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dd4z\" (UniqueName: \"kubernetes.io/projected/b68d396b-3163-410c-9755-5c91f03fd063-kube-api-access-7dd4z\") pod \"kube-proxy-5dhvv\" (UID: \"b68d396b-3163-410c-9755-5c91f03fd063\") " pod="kube-system/kube-proxy-5dhvv" Oct 27 08:24:39.190483 systemd[1]: Created slice kubepods-besteffort-pod7b056875_c6f8_42fb_a016_d0fd614ceab5.slice - libcontainer container kubepods-besteffort-pod7b056875_c6f8_42fb_a016_d0fd614ceab5.slice. 
Oct 27 08:24:39.226572 kubelet[2775]: E1027 08:24:39.225949 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:39.227264 containerd[1614]: time="2025-10-27T08:24:39.226935928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5dhvv,Uid:b68d396b-3163-410c-9755-5c91f03fd063,Namespace:kube-system,Attempt:0,}" Oct 27 08:24:39.251323 containerd[1614]: time="2025-10-27T08:24:39.251256784Z" level=info msg="connecting to shim 1c3a471494a4013bca5fe52c459fe244482534962a5bfc699f321fb6c4bbcc51" address="unix:///run/containerd/s/3ba0a6a566a3b914c3a7ceadd836cb5941f0e5809dbba436238aaa840f43481e" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:39.271218 kubelet[2775]: I1027 08:24:39.270818 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfm28\" (UniqueName: \"kubernetes.io/projected/7b056875-c6f8-42fb-a016-d0fd614ceab5-kube-api-access-vfm28\") pod \"tigera-operator-7dcd859c48-6h9gj\" (UID: \"7b056875-c6f8-42fb-a016-d0fd614ceab5\") " pod="tigera-operator/tigera-operator-7dcd859c48-6h9gj" Oct 27 08:24:39.271218 kubelet[2775]: I1027 08:24:39.270879 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b056875-c6f8-42fb-a016-d0fd614ceab5-var-lib-calico\") pod \"tigera-operator-7dcd859c48-6h9gj\" (UID: \"7b056875-c6f8-42fb-a016-d0fd614ceab5\") " pod="tigera-operator/tigera-operator-7dcd859c48-6h9gj" Oct 27 08:24:39.294832 systemd[1]: Started cri-containerd-1c3a471494a4013bca5fe52c459fe244482534962a5bfc699f321fb6c4bbcc51.scope - libcontainer container 1c3a471494a4013bca5fe52c459fe244482534962a5bfc699f321fb6c4bbcc51. 
Oct 27 08:24:39.332630 containerd[1614]: time="2025-10-27T08:24:39.332455253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5dhvv,Uid:b68d396b-3163-410c-9755-5c91f03fd063,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c3a471494a4013bca5fe52c459fe244482534962a5bfc699f321fb6c4bbcc51\"" Oct 27 08:24:39.334108 kubelet[2775]: E1027 08:24:39.334071 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:39.341352 containerd[1614]: time="2025-10-27T08:24:39.341300340Z" level=info msg="CreateContainer within sandbox \"1c3a471494a4013bca5fe52c459fe244482534962a5bfc699f321fb6c4bbcc51\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 08:24:39.355796 containerd[1614]: time="2025-10-27T08:24:39.355739348Z" level=info msg="Container 74bd8fca40064e58da66463c4108bcceece46f51bcad290c4648a8d777f8aa43: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:24:39.375747 containerd[1614]: time="2025-10-27T08:24:39.375492791Z" level=info msg="CreateContainer within sandbox \"1c3a471494a4013bca5fe52c459fe244482534962a5bfc699f321fb6c4bbcc51\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"74bd8fca40064e58da66463c4108bcceece46f51bcad290c4648a8d777f8aa43\"" Oct 27 08:24:39.377543 containerd[1614]: time="2025-10-27T08:24:39.376836183Z" level=info msg="StartContainer for \"74bd8fca40064e58da66463c4108bcceece46f51bcad290c4648a8d777f8aa43\"" Oct 27 08:24:39.379144 containerd[1614]: time="2025-10-27T08:24:39.379099039Z" level=info msg="connecting to shim 74bd8fca40064e58da66463c4108bcceece46f51bcad290c4648a8d777f8aa43" address="unix:///run/containerd/s/3ba0a6a566a3b914c3a7ceadd836cb5941f0e5809dbba436238aaa840f43481e" protocol=ttrpc version=3 Oct 27 08:24:39.406871 systemd[1]: Started cri-containerd-74bd8fca40064e58da66463c4108bcceece46f51bcad290c4648a8d777f8aa43.scope - 
libcontainer container 74bd8fca40064e58da66463c4108bcceece46f51bcad290c4648a8d777f8aa43. Oct 27 08:24:39.466848 containerd[1614]: time="2025-10-27T08:24:39.466731062Z" level=info msg="StartContainer for \"74bd8fca40064e58da66463c4108bcceece46f51bcad290c4648a8d777f8aa43\" returns successfully" Oct 27 08:24:39.496198 containerd[1614]: time="2025-10-27T08:24:39.496144893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6h9gj,Uid:7b056875-c6f8-42fb-a016-d0fd614ceab5,Namespace:tigera-operator,Attempt:0,}" Oct 27 08:24:39.506827 kubelet[2775]: E1027 08:24:39.506780 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:39.520470 containerd[1614]: time="2025-10-27T08:24:39.520410942Z" level=info msg="connecting to shim 654146aeb601bcb23224fa13783b98857510df74be8d4b7dc41a34bc7368cef2" address="unix:///run/containerd/s/b16d7836705c0115220332531307e3adbd80358a4a5483a8b20bbcb89de9808d" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:39.531626 kubelet[2775]: I1027 08:24:39.531040 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5dhvv" podStartSLOduration=1.531013913 podStartE2EDuration="1.531013913s" podCreationTimestamp="2025-10-27 08:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:24:39.53075412 +0000 UTC m=+7.259311950" watchObservedRunningTime="2025-10-27 08:24:39.531013913 +0000 UTC m=+7.259571733" Oct 27 08:24:39.577811 systemd[1]: Started cri-containerd-654146aeb601bcb23224fa13783b98857510df74be8d4b7dc41a34bc7368cef2.scope - libcontainer container 654146aeb601bcb23224fa13783b98857510df74be8d4b7dc41a34bc7368cef2. 
Oct 27 08:24:39.647780 containerd[1614]: time="2025-10-27T08:24:39.647733923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6h9gj,Uid:7b056875-c6f8-42fb-a016-d0fd614ceab5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"654146aeb601bcb23224fa13783b98857510df74be8d4b7dc41a34bc7368cef2\"" Oct 27 08:24:39.651821 containerd[1614]: time="2025-10-27T08:24:39.651779079Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 27 08:24:40.441834 kubelet[2775]: E1027 08:24:40.441551 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:40.511488 kubelet[2775]: E1027 08:24:40.511420 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:41.166055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065616267.mount: Deactivated successfully. 
Oct 27 08:24:41.920246 kubelet[2775]: E1027 08:24:41.920210 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:42.517268 kubelet[2775]: E1027 08:24:42.516835 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:43.217552 kubelet[2775]: E1027 08:24:43.217189 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:43.522999 kubelet[2775]: E1027 08:24:43.522840 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:44.031346 containerd[1614]: time="2025-10-27T08:24:44.031281543Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:24:44.032175 containerd[1614]: time="2025-10-27T08:24:44.032139920Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 27 08:24:44.033734 containerd[1614]: time="2025-10-27T08:24:44.032611110Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:24:44.034361 containerd[1614]: time="2025-10-27T08:24:44.034329872Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:24:44.035121 containerd[1614]: 
time="2025-10-27T08:24:44.035068578Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.383250938s" Oct 27 08:24:44.035246 containerd[1614]: time="2025-10-27T08:24:44.035221459Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 27 08:24:44.040444 containerd[1614]: time="2025-10-27T08:24:44.040388329Z" level=info msg="CreateContainer within sandbox \"654146aeb601bcb23224fa13783b98857510df74be8d4b7dc41a34bc7368cef2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 27 08:24:44.048113 containerd[1614]: time="2025-10-27T08:24:44.048062559Z" level=info msg="Container 5f90bec406c437725f89c8bf6f8271b788e6f43fafd0ee5c5febeac192b8aa7f: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:24:44.057011 containerd[1614]: time="2025-10-27T08:24:44.056806946Z" level=info msg="CreateContainer within sandbox \"654146aeb601bcb23224fa13783b98857510df74be8d4b7dc41a34bc7368cef2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5f90bec406c437725f89c8bf6f8271b788e6f43fafd0ee5c5febeac192b8aa7f\"" Oct 27 08:24:44.059537 containerd[1614]: time="2025-10-27T08:24:44.058282372Z" level=info msg="StartContainer for \"5f90bec406c437725f89c8bf6f8271b788e6f43fafd0ee5c5febeac192b8aa7f\"" Oct 27 08:24:44.060932 containerd[1614]: time="2025-10-27T08:24:44.060489831Z" level=info msg="connecting to shim 5f90bec406c437725f89c8bf6f8271b788e6f43fafd0ee5c5febeac192b8aa7f" address="unix:///run/containerd/s/b16d7836705c0115220332531307e3adbd80358a4a5483a8b20bbcb89de9808d" protocol=ttrpc version=3 Oct 27 08:24:44.097973 systemd[1]: Started 
cri-containerd-5f90bec406c437725f89c8bf6f8271b788e6f43fafd0ee5c5febeac192b8aa7f.scope - libcontainer container 5f90bec406c437725f89c8bf6f8271b788e6f43fafd0ee5c5febeac192b8aa7f. Oct 27 08:24:44.135225 containerd[1614]: time="2025-10-27T08:24:44.135175676Z" level=info msg="StartContainer for \"5f90bec406c437725f89c8bf6f8271b788e6f43fafd0ee5c5febeac192b8aa7f\" returns successfully" Oct 27 08:24:44.839485 update_engine[1575]: I20251027 08:24:44.839326 1575 update_attempter.cc:509] Updating boot flags... Oct 27 08:24:51.088765 sudo[1825]: pam_unix(sudo:session): session closed for user root Oct 27 08:24:51.093541 sshd[1824]: Connection closed by 139.178.89.65 port 34970 Oct 27 08:24:51.095053 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Oct 27 08:24:51.102061 systemd-logind[1572]: Session 7 logged out. Waiting for processes to exit. Oct 27 08:24:51.102860 systemd[1]: sshd@6-143.198.224.48:22-139.178.89.65:34970.service: Deactivated successfully. Oct 27 08:24:51.110429 systemd[1]: session-7.scope: Deactivated successfully. Oct 27 08:24:51.110959 systemd[1]: session-7.scope: Consumed 6.606s CPU time, 156.8M memory peak. Oct 27 08:24:51.115087 systemd-logind[1572]: Removed session 7. 
Oct 27 08:24:57.313185 kubelet[2775]: I1027 08:24:57.312599 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-6h9gj" podStartSLOduration=13.926631423 podStartE2EDuration="18.312582733s" podCreationTimestamp="2025-10-27 08:24:39 +0000 UTC" firstStartedPulling="2025-10-27 08:24:39.650206067 +0000 UTC m=+7.378763887" lastFinishedPulling="2025-10-27 08:24:44.036157321 +0000 UTC m=+11.764715197" observedRunningTime="2025-10-27 08:24:44.54241077 +0000 UTC m=+12.270968596" watchObservedRunningTime="2025-10-27 08:24:57.312582733 +0000 UTC m=+25.041140558" Oct 27 08:24:57.324755 systemd[1]: Created slice kubepods-besteffort-pod14102fec_1b68_4045_bf21_23745922f07c.slice - libcontainer container kubepods-besteffort-pod14102fec_1b68_4045_bf21_23745922f07c.slice. Oct 27 08:24:57.401696 kubelet[2775]: I1027 08:24:57.401528 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14102fec-1b68-4045-bf21-23745922f07c-tigera-ca-bundle\") pod \"calico-typha-77469d55d7-8z258\" (UID: \"14102fec-1b68-4045-bf21-23745922f07c\") " pod="calico-system/calico-typha-77469d55d7-8z258" Oct 27 08:24:57.401696 kubelet[2775]: I1027 08:24:57.401595 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/14102fec-1b68-4045-bf21-23745922f07c-typha-certs\") pod \"calico-typha-77469d55d7-8z258\" (UID: \"14102fec-1b68-4045-bf21-23745922f07c\") " pod="calico-system/calico-typha-77469d55d7-8z258" Oct 27 08:24:57.401696 kubelet[2775]: I1027 08:24:57.401624 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjc2h\" (UniqueName: \"kubernetes.io/projected/14102fec-1b68-4045-bf21-23745922f07c-kube-api-access-mjc2h\") pod \"calico-typha-77469d55d7-8z258\" (UID: 
\"14102fec-1b68-4045-bf21-23745922f07c\") " pod="calico-system/calico-typha-77469d55d7-8z258" Oct 27 08:24:57.499223 systemd[1]: Created slice kubepods-besteffort-podd66fd4ca_46b2_4152_9406_60393aa398d7.slice - libcontainer container kubepods-besteffort-podd66fd4ca_46b2_4152_9406_60393aa398d7.slice. Oct 27 08:24:57.602691 kubelet[2775]: I1027 08:24:57.602344 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d66fd4ca-46b2-4152-9406-60393aa398d7-var-run-calico\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.602691 kubelet[2775]: I1027 08:24:57.602420 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d66fd4ca-46b2-4152-9406-60393aa398d7-cni-log-dir\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.602691 kubelet[2775]: I1027 08:24:57.602452 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d66fd4ca-46b2-4152-9406-60393aa398d7-cni-net-dir\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.602691 kubelet[2775]: I1027 08:24:57.602474 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d66fd4ca-46b2-4152-9406-60393aa398d7-var-lib-calico\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.603270 kubelet[2775]: I1027 08:24:57.602495 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"policysync\" (UniqueName: \"kubernetes.io/host-path/d66fd4ca-46b2-4152-9406-60393aa398d7-policysync\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.603270 kubelet[2775]: I1027 08:24:57.603128 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d66fd4ca-46b2-4152-9406-60393aa398d7-cni-bin-dir\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.603270 kubelet[2775]: I1027 08:24:57.603152 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d66fd4ca-46b2-4152-9406-60393aa398d7-lib-modules\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.603270 kubelet[2775]: I1027 08:24:57.603172 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d66fd4ca-46b2-4152-9406-60393aa398d7-tigera-ca-bundle\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.603270 kubelet[2775]: I1027 08:24:57.603190 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d66fd4ca-46b2-4152-9406-60393aa398d7-xtables-lock\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.603454 kubelet[2775]: I1027 08:24:57.603214 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqvn\" (UniqueName: 
\"kubernetes.io/projected/d66fd4ca-46b2-4152-9406-60393aa398d7-kube-api-access-vkqvn\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.603454 kubelet[2775]: I1027 08:24:57.603237 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d66fd4ca-46b2-4152-9406-60393aa398d7-flexvol-driver-host\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.603631 kubelet[2775]: I1027 08:24:57.603566 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d66fd4ca-46b2-4152-9406-60393aa398d7-node-certs\") pod \"calico-node-q6pzc\" (UID: \"d66fd4ca-46b2-4152-9406-60393aa398d7\") " pod="calico-system/calico-node-q6pzc" Oct 27 08:24:57.629624 kubelet[2775]: E1027 08:24:57.628988 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:57.631000 containerd[1614]: time="2025-10-27T08:24:57.630940222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77469d55d7-8z258,Uid:14102fec-1b68-4045-bf21-23745922f07c,Namespace:calico-system,Attempt:0,}" Oct 27 08:24:57.670785 containerd[1614]: time="2025-10-27T08:24:57.670716225Z" level=info msg="connecting to shim 1eb1cbe3ab582d23f4d7ef0f694fb57a26d00a9c61e0e063a130380e6540e2d8" address="unix:///run/containerd/s/25c3b0f8a748503d29f79c8acd66a7873940f4cbbdb7bc56c42c84e23e7aab70" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:57.713959 kubelet[2775]: E1027 08:24:57.713920 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.714198 
kubelet[2775]: W1027 08:24:57.714170 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.714627 kubelet[2775]: E1027 08:24:57.714607 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.733266 kubelet[2775]: E1027 08:24:57.733223 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.733266 kubelet[2775]: W1027 08:24:57.733258 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.733778 kubelet[2775]: E1027 08:24:57.733284 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.740340 kubelet[2775]: E1027 08:24:57.740149 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.741364 kubelet[2775]: W1027 08:24:57.741251 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.741364 kubelet[2775]: E1027 08:24:57.741289 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.749740 systemd[1]: Started cri-containerd-1eb1cbe3ab582d23f4d7ef0f694fb57a26d00a9c61e0e063a130380e6540e2d8.scope - libcontainer container 1eb1cbe3ab582d23f4d7ef0f694fb57a26d00a9c61e0e063a130380e6540e2d8. Oct 27 08:24:57.802731 kubelet[2775]: E1027 08:24:57.802683 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:24:57.806123 kubelet[2775]: E1027 08:24:57.806072 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:57.807101 containerd[1614]: time="2025-10-27T08:24:57.807028362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q6pzc,Uid:d66fd4ca-46b2-4152-9406-60393aa398d7,Namespace:calico-system,Attempt:0,}" Oct 27 08:24:57.834645 containerd[1614]: time="2025-10-27T08:24:57.834592699Z" level=info msg="connecting to shim d0eee653f043afb187fa721e7c9e0189e8157b98407c49742eeb86265e84dec3" address="unix:///run/containerd/s/0cc0d2781df058cbcb466be5c52ab36e30470fa5879f9270722777244f43bce6" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:57.881375 systemd[1]: Started cri-containerd-d0eee653f043afb187fa721e7c9e0189e8157b98407c49742eeb86265e84dec3.scope - libcontainer container d0eee653f043afb187fa721e7c9e0189e8157b98407c49742eeb86265e84dec3. 
Oct 27 08:24:57.888459 kubelet[2775]: E1027 08:24:57.888316 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.888459 kubelet[2775]: W1027 08:24:57.888344 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.888459 kubelet[2775]: E1027 08:24:57.888368 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.888969 kubelet[2775]: E1027 08:24:57.888816 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.888969 kubelet[2775]: W1027 08:24:57.888843 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.888969 kubelet[2775]: E1027 08:24:57.888863 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.889133 kubelet[2775]: E1027 08:24:57.889123 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.889332 kubelet[2775]: W1027 08:24:57.889167 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.889332 kubelet[2775]: E1027 08:24:57.889183 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.889458 kubelet[2775]: E1027 08:24:57.889448 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.889528 kubelet[2775]: W1027 08:24:57.889519 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.889595 kubelet[2775]: E1027 08:24:57.889586 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.890537 kubelet[2775]: E1027 08:24:57.889882 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.890695 kubelet[2775]: W1027 08:24:57.890634 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.890695 kubelet[2775]: E1027 08:24:57.890651 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.891018 kubelet[2775]: E1027 08:24:57.890955 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.891018 kubelet[2775]: W1027 08:24:57.890966 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.891018 kubelet[2775]: E1027 08:24:57.890975 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.891433 kubelet[2775]: E1027 08:24:57.891332 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.891433 kubelet[2775]: W1027 08:24:57.891347 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.891433 kubelet[2775]: E1027 08:24:57.891357 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.891689 kubelet[2775]: E1027 08:24:57.891678 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.892571 kubelet[2775]: W1027 08:24:57.892552 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.892731 kubelet[2775]: E1027 08:24:57.892630 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.893007 kubelet[2775]: E1027 08:24:57.892945 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.893007 kubelet[2775]: W1027 08:24:57.892956 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.893007 kubelet[2775]: E1027 08:24:57.892966 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.893371 kubelet[2775]: E1027 08:24:57.893301 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.893371 kubelet[2775]: W1027 08:24:57.893312 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.893371 kubelet[2775]: E1027 08:24:57.893322 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.894553 kubelet[2775]: E1027 08:24:57.893651 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.894553 kubelet[2775]: W1027 08:24:57.893661 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.894553 kubelet[2775]: E1027 08:24:57.893670 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.894920 kubelet[2775]: E1027 08:24:57.894865 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.894920 kubelet[2775]: W1027 08:24:57.894878 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.894920 kubelet[2775]: E1027 08:24:57.894889 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.895262 kubelet[2775]: E1027 08:24:57.895221 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.895262 kubelet[2775]: W1027 08:24:57.895231 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.895262 kubelet[2775]: E1027 08:24:57.895242 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.895631 kubelet[2775]: E1027 08:24:57.895553 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.895631 kubelet[2775]: W1027 08:24:57.895564 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.895631 kubelet[2775]: E1027 08:24:57.895573 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.895882 kubelet[2775]: E1027 08:24:57.895859 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.896118 kubelet[2775]: W1027 08:24:57.896106 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.896267 kubelet[2775]: E1027 08:24:57.896174 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.896477 kubelet[2775]: E1027 08:24:57.896359 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.896559 kubelet[2775]: W1027 08:24:57.896549 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.896754 kubelet[2775]: E1027 08:24:57.896738 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.897242 kubelet[2775]: E1027 08:24:57.897044 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.897242 kubelet[2775]: W1027 08:24:57.897057 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.897242 kubelet[2775]: E1027 08:24:57.897067 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.897750 kubelet[2775]: E1027 08:24:57.897608 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.897750 kubelet[2775]: W1027 08:24:57.897622 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.897750 kubelet[2775]: E1027 08:24:57.897633 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.898291 kubelet[2775]: E1027 08:24:57.898013 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.898291 kubelet[2775]: W1027 08:24:57.898024 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.898291 kubelet[2775]: E1027 08:24:57.898033 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.898904 kubelet[2775]: E1027 08:24:57.898634 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.898904 kubelet[2775]: W1027 08:24:57.898645 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.898904 kubelet[2775]: E1027 08:24:57.898656 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.907272 kubelet[2775]: E1027 08:24:57.907240 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.907272 kubelet[2775]: W1027 08:24:57.907263 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.907272 kubelet[2775]: E1027 08:24:57.907284 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.907485 kubelet[2775]: I1027 08:24:57.907315 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d02f52d-c9e1-4d0e-b6df-042109e24c03-kubelet-dir\") pod \"csi-node-driver-jn8zm\" (UID: \"9d02f52d-c9e1-4d0e-b6df-042109e24c03\") " pod="calico-system/csi-node-driver-jn8zm" Oct 27 08:24:57.907998 kubelet[2775]: E1027 08:24:57.907553 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.907998 kubelet[2775]: W1027 08:24:57.907889 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.907998 kubelet[2775]: E1027 08:24:57.907907 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.908594 kubelet[2775]: I1027 08:24:57.908566 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9d02f52d-c9e1-4d0e-b6df-042109e24c03-varrun\") pod \"csi-node-driver-jn8zm\" (UID: \"9d02f52d-c9e1-4d0e-b6df-042109e24c03\") " pod="calico-system/csi-node-driver-jn8zm" Oct 27 08:24:57.909884 kubelet[2775]: E1027 08:24:57.909841 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.909884 kubelet[2775]: W1027 08:24:57.909875 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.909884 kubelet[2775]: E1027 08:24:57.909890 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.911663 kubelet[2775]: E1027 08:24:57.911638 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.911663 kubelet[2775]: W1027 08:24:57.911655 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.911663 kubelet[2775]: E1027 08:24:57.911670 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.911914 kubelet[2775]: E1027 08:24:57.911897 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.911944 kubelet[2775]: W1027 08:24:57.911923 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.911944 kubelet[2775]: E1027 08:24:57.911935 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.912042 kubelet[2775]: I1027 08:24:57.912024 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9d02f52d-c9e1-4d0e-b6df-042109e24c03-registration-dir\") pod \"csi-node-driver-jn8zm\" (UID: \"9d02f52d-c9e1-4d0e-b6df-042109e24c03\") " pod="calico-system/csi-node-driver-jn8zm" Oct 27 08:24:57.912244 kubelet[2775]: E1027 08:24:57.912230 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.912244 kubelet[2775]: W1027 08:24:57.912242 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.912321 kubelet[2775]: E1027 08:24:57.912253 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.912642 kubelet[2775]: E1027 08:24:57.912627 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.912724 kubelet[2775]: W1027 08:24:57.912641 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.912767 kubelet[2775]: E1027 08:24:57.912751 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.913113 kubelet[2775]: E1027 08:24:57.913095 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.913282 kubelet[2775]: W1027 08:24:57.913120 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.913282 kubelet[2775]: E1027 08:24:57.913131 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.913282 kubelet[2775]: I1027 08:24:57.913158 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9d02f52d-c9e1-4d0e-b6df-042109e24c03-socket-dir\") pod \"csi-node-driver-jn8zm\" (UID: \"9d02f52d-c9e1-4d0e-b6df-042109e24c03\") " pod="calico-system/csi-node-driver-jn8zm" Oct 27 08:24:57.913787 kubelet[2775]: E1027 08:24:57.913578 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.913787 kubelet[2775]: W1027 08:24:57.913595 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.913787 kubelet[2775]: E1027 08:24:57.913611 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.914200 kubelet[2775]: E1027 08:24:57.914185 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.914578 kubelet[2775]: W1027 08:24:57.914561 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.914642 kubelet[2775]: E1027 08:24:57.914632 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.915493 kubelet[2775]: E1027 08:24:57.915389 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.915493 kubelet[2775]: W1027 08:24:57.915404 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.915493 kubelet[2775]: E1027 08:24:57.915418 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.915493 kubelet[2775]: I1027 08:24:57.915449 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm584\" (UniqueName: \"kubernetes.io/projected/9d02f52d-c9e1-4d0e-b6df-042109e24c03-kube-api-access-bm584\") pod \"csi-node-driver-jn8zm\" (UID: \"9d02f52d-c9e1-4d0e-b6df-042109e24c03\") " pod="calico-system/csi-node-driver-jn8zm" Oct 27 08:24:57.915752 kubelet[2775]: E1027 08:24:57.915732 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.915752 kubelet[2775]: W1027 08:24:57.915750 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.915821 kubelet[2775]: E1027 08:24:57.915763 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.916114 kubelet[2775]: E1027 08:24:57.916094 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.916114 kubelet[2775]: W1027 08:24:57.916108 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.916487 kubelet[2775]: E1027 08:24:57.916200 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:57.917225 kubelet[2775]: E1027 08:24:57.917206 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.917284 kubelet[2775]: W1027 08:24:57.917233 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.917284 kubelet[2775]: E1027 08:24:57.917245 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:57.917416 kubelet[2775]: E1027 08:24:57.917398 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:57.917444 kubelet[2775]: W1027 08:24:57.917415 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:57.917542 kubelet[2775]: E1027 08:24:57.917425 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.009406 containerd[1614]: time="2025-10-27T08:24:58.009310412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q6pzc,Uid:d66fd4ca-46b2-4152-9406-60393aa398d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0eee653f043afb187fa721e7c9e0189e8157b98407c49742eeb86265e84dec3\"" Oct 27 08:24:58.012091 kubelet[2775]: E1027 08:24:58.012011 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:58.014032 containerd[1614]: time="2025-10-27T08:24:58.013963354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 27 08:24:58.018327 kubelet[2775]: E1027 08:24:58.017774 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.018327 kubelet[2775]: W1027 08:24:58.017795 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.018327 kubelet[2775]: E1027 08:24:58.017832 2775 plugins.go:703] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.018327 kubelet[2775]: E1027 08:24:58.018084 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.018327 kubelet[2775]: W1027 08:24:58.018092 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.018327 kubelet[2775]: E1027 08:24:58.018102 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.018972 kubelet[2775]: E1027 08:24:58.018857 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.018972 kubelet[2775]: W1027 08:24:58.018869 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.021299 kubelet[2775]: E1027 08:24:58.019558 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.021299 kubelet[2775]: E1027 08:24:58.020164 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.021299 kubelet[2775]: W1027 08:24:58.020175 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.021299 kubelet[2775]: E1027 08:24:58.020190 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.021299 kubelet[2775]: E1027 08:24:58.020802 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.021299 kubelet[2775]: W1027 08:24:58.020882 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.021299 kubelet[2775]: E1027 08:24:58.020904 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.021726 kubelet[2775]: E1027 08:24:58.021648 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.021726 kubelet[2775]: W1027 08:24:58.021663 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.021726 kubelet[2775]: E1027 08:24:58.021679 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.021905 kubelet[2775]: E1027 08:24:58.021888 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.021936 kubelet[2775]: W1027 08:24:58.021905 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.021936 kubelet[2775]: E1027 08:24:58.021919 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.024399 kubelet[2775]: E1027 08:24:58.022754 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.024399 kubelet[2775]: W1027 08:24:58.022769 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.024399 kubelet[2775]: E1027 08:24:58.022781 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.024399 kubelet[2775]: E1027 08:24:58.023209 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.024399 kubelet[2775]: W1027 08:24:58.023221 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.024399 kubelet[2775]: E1027 08:24:58.023236 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.024399 kubelet[2775]: E1027 08:24:58.023823 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.024399 kubelet[2775]: W1027 08:24:58.023833 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.024399 kubelet[2775]: E1027 08:24:58.023844 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.024746 kubelet[2775]: E1027 08:24:58.024438 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.024746 kubelet[2775]: W1027 08:24:58.024449 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.024746 kubelet[2775]: E1027 08:24:58.024461 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.025111 kubelet[2775]: E1027 08:24:58.025082 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.025111 kubelet[2775]: W1027 08:24:58.025101 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.025213 kubelet[2775]: E1027 08:24:58.025116 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.027069 kubelet[2775]: E1027 08:24:58.025674 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.027069 kubelet[2775]: W1027 08:24:58.025696 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.027069 kubelet[2775]: E1027 08:24:58.025712 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.027069 kubelet[2775]: E1027 08:24:58.026028 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.027069 kubelet[2775]: W1027 08:24:58.026040 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.027069 kubelet[2775]: E1027 08:24:58.026051 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.027069 kubelet[2775]: E1027 08:24:58.026396 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.027069 kubelet[2775]: W1027 08:24:58.026408 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.027069 kubelet[2775]: E1027 08:24:58.026424 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.027710 kubelet[2775]: E1027 08:24:58.027671 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.027710 kubelet[2775]: W1027 08:24:58.027686 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.027710 kubelet[2775]: E1027 08:24:58.027698 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.027881 kubelet[2775]: E1027 08:24:58.027868 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.027881 kubelet[2775]: W1027 08:24:58.027878 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.027948 kubelet[2775]: E1027 08:24:58.027886 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.030416 kubelet[2775]: E1027 08:24:58.028126 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.030416 kubelet[2775]: W1027 08:24:58.028139 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.030416 kubelet[2775]: E1027 08:24:58.028148 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.030416 kubelet[2775]: E1027 08:24:58.028341 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.030416 kubelet[2775]: W1027 08:24:58.028349 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.030416 kubelet[2775]: E1027 08:24:58.028358 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.030416 kubelet[2775]: E1027 08:24:58.028539 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.030416 kubelet[2775]: W1027 08:24:58.028548 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.030416 kubelet[2775]: E1027 08:24:58.028555 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.030416 kubelet[2775]: E1027 08:24:58.028889 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.030827 kubelet[2775]: W1027 08:24:58.028898 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.030827 kubelet[2775]: E1027 08:24:58.028908 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.030827 kubelet[2775]: E1027 08:24:58.029348 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.030827 kubelet[2775]: W1027 08:24:58.029358 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.030827 kubelet[2775]: E1027 08:24:58.029368 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.030827 kubelet[2775]: E1027 08:24:58.029849 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.030827 kubelet[2775]: W1027 08:24:58.029864 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.030827 kubelet[2775]: E1027 08:24:58.029875 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.031016 kubelet[2775]: E1027 08:24:58.030864 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.031016 kubelet[2775]: W1027 08:24:58.030874 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.031016 kubelet[2775]: E1027 08:24:58.030885 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:58.033103 kubelet[2775]: E1027 08:24:58.031931 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.033103 kubelet[2775]: W1027 08:24:58.031945 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.033103 kubelet[2775]: E1027 08:24:58.031956 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:24:58.071758 containerd[1614]: time="2025-10-27T08:24:58.071711518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77469d55d7-8z258,Uid:14102fec-1b68-4045-bf21-23745922f07c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1eb1cbe3ab582d23f4d7ef0f694fb57a26d00a9c61e0e063a130380e6540e2d8\"" Oct 27 08:24:58.072605 kubelet[2775]: E1027 08:24:58.072560 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:24:58.086904 kubelet[2775]: E1027 08:24:58.086873 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:24:58.087156 kubelet[2775]: W1027 08:24:58.087075 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:24:58.087156 kubelet[2775]: E1027 08:24:58.087102 2775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:24:59.458644 kubelet[2775]: E1027 08:24:59.458562 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:24:59.577502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241949380.mount: Deactivated successfully. 
Oct 27 08:25:00.025302 containerd[1614]: time="2025-10-27T08:25:00.023620359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:00.025302 containerd[1614]: time="2025-10-27T08:25:00.025227654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Oct 27 08:25:00.025801 containerd[1614]: time="2025-10-27T08:25:00.025366357Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:00.027203 containerd[1614]: time="2025-10-27T08:25:00.027162554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:00.027921 containerd[1614]: time="2025-10-27T08:25:00.027801985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.013210786s" Oct 27 08:25:00.027921 containerd[1614]: time="2025-10-27T08:25:00.027836824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 27 08:25:00.031746 containerd[1614]: time="2025-10-27T08:25:00.031534905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 27 08:25:00.037814 containerd[1614]: time="2025-10-27T08:25:00.037760158Z" level=info msg="CreateContainer within 
sandbox \"d0eee653f043afb187fa721e7c9e0189e8157b98407c49742eeb86265e84dec3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 27 08:25:00.052885 containerd[1614]: time="2025-10-27T08:25:00.050668873Z" level=info msg="Container 3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:25:00.108577 containerd[1614]: time="2025-10-27T08:25:00.108499474Z" level=info msg="CreateContainer within sandbox \"d0eee653f043afb187fa721e7c9e0189e8157b98407c49742eeb86265e84dec3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211\"" Oct 27 08:25:00.109462 containerd[1614]: time="2025-10-27T08:25:00.109431708Z" level=info msg="StartContainer for \"3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211\"" Oct 27 08:25:00.111609 containerd[1614]: time="2025-10-27T08:25:00.111580111Z" level=info msg="connecting to shim 3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211" address="unix:///run/containerd/s/0cc0d2781df058cbcb466be5c52ab36e30470fa5879f9270722777244f43bce6" protocol=ttrpc version=3 Oct 27 08:25:00.160935 systemd[1]: Started cri-containerd-3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211.scope - libcontainer container 3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211. Oct 27 08:25:00.243424 containerd[1614]: time="2025-10-27T08:25:00.243217397Z" level=info msg="StartContainer for \"3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211\" returns successfully" Oct 27 08:25:00.255228 systemd[1]: cri-containerd-3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211.scope: Deactivated successfully. 
Oct 27 08:25:00.273863 containerd[1614]: time="2025-10-27T08:25:00.273468539Z" level=info msg="received exit event container_id:\"3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211\" id:\"3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211\" pid:3402 exited_at:{seconds:1761553500 nanos:256787145}" Oct 27 08:25:00.302107 containerd[1614]: time="2025-10-27T08:25:00.301873127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211\" id:\"3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211\" pid:3402 exited_at:{seconds:1761553500 nanos:256787145}" Oct 27 08:25:00.523933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c69c5a1b557a7e624e019aa29772080d0d2d9927b563495ca87c54e97446211-rootfs.mount: Deactivated successfully. Oct 27 08:25:00.599177 kubelet[2775]: E1027 08:25:00.599043 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:01.458558 kubelet[2775]: E1027 08:25:01.458405 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:25:03.461051 kubelet[2775]: E1027 08:25:03.460497 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:25:04.465649 containerd[1614]: time="2025-10-27T08:25:04.465562353Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:04.487374 containerd[1614]: time="2025-10-27T08:25:04.466835047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Oct 27 08:25:04.487751 containerd[1614]: time="2025-10-27T08:25:04.476336228Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:04.488189 containerd[1614]: time="2025-10-27T08:25:04.482496918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.450161828s" Oct 27 08:25:04.488189 containerd[1614]: time="2025-10-27T08:25:04.487895509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 27 08:25:04.489556 containerd[1614]: time="2025-10-27T08:25:04.489467089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:04.492475 containerd[1614]: time="2025-10-27T08:25:04.492427208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 27 08:25:04.519346 containerd[1614]: time="2025-10-27T08:25:04.519267131Z" level=info msg="CreateContainer within sandbox \"1eb1cbe3ab582d23f4d7ef0f694fb57a26d00a9c61e0e063a130380e6540e2d8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 27 08:25:04.560553 containerd[1614]: 
time="2025-10-27T08:25:04.559928184Z" level=info msg="Container fe350376ae9c30a1612b47217a93dd41931e6ecba3909700bd65a74563ab6ee1: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:25:04.565098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886202301.mount: Deactivated successfully. Oct 27 08:25:04.574097 containerd[1614]: time="2025-10-27T08:25:04.574038393Z" level=info msg="CreateContainer within sandbox \"1eb1cbe3ab582d23f4d7ef0f694fb57a26d00a9c61e0e063a130380e6540e2d8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fe350376ae9c30a1612b47217a93dd41931e6ecba3909700bd65a74563ab6ee1\"" Oct 27 08:25:04.575304 containerd[1614]: time="2025-10-27T08:25:04.575150639Z" level=info msg="StartContainer for \"fe350376ae9c30a1612b47217a93dd41931e6ecba3909700bd65a74563ab6ee1\"" Oct 27 08:25:04.578299 containerd[1614]: time="2025-10-27T08:25:04.578242917Z" level=info msg="connecting to shim fe350376ae9c30a1612b47217a93dd41931e6ecba3909700bd65a74563ab6ee1" address="unix:///run/containerd/s/25c3b0f8a748503d29f79c8acd66a7873940f4cbbdb7bc56c42c84e23e7aab70" protocol=ttrpc version=3 Oct 27 08:25:04.617934 systemd[1]: Started cri-containerd-fe350376ae9c30a1612b47217a93dd41931e6ecba3909700bd65a74563ab6ee1.scope - libcontainer container fe350376ae9c30a1612b47217a93dd41931e6ecba3909700bd65a74563ab6ee1. 
Oct 27 08:25:04.692558 containerd[1614]: time="2025-10-27T08:25:04.692458855Z" level=info msg="StartContainer for \"fe350376ae9c30a1612b47217a93dd41931e6ecba3909700bd65a74563ab6ee1\" returns successfully" Oct 27 08:25:05.459342 kubelet[2775]: E1027 08:25:05.458870 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:25:05.627554 kubelet[2775]: E1027 08:25:05.625752 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:06.628622 kubelet[2775]: I1027 08:25:06.628301 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:25:06.629762 kubelet[2775]: E1027 08:25:06.629652 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:07.459200 kubelet[2775]: E1027 08:25:07.459050 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:25:09.226152 containerd[1614]: time="2025-10-27T08:25:09.226048349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:09.227704 containerd[1614]: time="2025-10-27T08:25:09.227434750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes 
read=70446859" Oct 27 08:25:09.228702 containerd[1614]: time="2025-10-27T08:25:09.228641880Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:09.237717 containerd[1614]: time="2025-10-27T08:25:09.237642753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:09.239187 containerd[1614]: time="2025-10-27T08:25:09.238889039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.746071536s" Oct 27 08:25:09.239187 containerd[1614]: time="2025-10-27T08:25:09.238948271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 27 08:25:09.247940 containerd[1614]: time="2025-10-27T08:25:09.247253424Z" level=info msg="CreateContainer within sandbox \"d0eee653f043afb187fa721e7c9e0189e8157b98407c49742eeb86265e84dec3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 27 08:25:09.273830 containerd[1614]: time="2025-10-27T08:25:09.273004727Z" level=info msg="Container 84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:25:09.291779 containerd[1614]: time="2025-10-27T08:25:09.291718980Z" level=info msg="CreateContainer within sandbox \"d0eee653f043afb187fa721e7c9e0189e8157b98407c49742eeb86265e84dec3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f\"" Oct 27 08:25:09.294247 containerd[1614]: time="2025-10-27T08:25:09.293024482Z" level=info msg="StartContainer for \"84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f\"" Oct 27 08:25:09.299041 containerd[1614]: time="2025-10-27T08:25:09.298967244Z" level=info msg="connecting to shim 84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f" address="unix:///run/containerd/s/0cc0d2781df058cbcb466be5c52ab36e30470fa5879f9270722777244f43bce6" protocol=ttrpc version=3 Oct 27 08:25:09.336951 systemd[1]: Started cri-containerd-84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f.scope - libcontainer container 84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f. Oct 27 08:25:09.417159 containerd[1614]: time="2025-10-27T08:25:09.417067642Z" level=info msg="StartContainer for \"84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f\" returns successfully" Oct 27 08:25:09.460246 kubelet[2775]: E1027 08:25:09.458784 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:25:09.651576 kubelet[2775]: E1027 08:25:09.650804 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:09.679751 kubelet[2775]: I1027 08:25:09.679568 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77469d55d7-8z258" podStartSLOduration=6.26588454 podStartE2EDuration="12.679485323s" podCreationTimestamp="2025-10-27 08:24:57 +0000 UTC" firstStartedPulling="2025-10-27 08:24:58.076821324 +0000 UTC m=+25.805379149" 
lastFinishedPulling="2025-10-27 08:25:04.49042212 +0000 UTC m=+32.218979932" observedRunningTime="2025-10-27 08:25:05.645475892 +0000 UTC m=+33.374033713" watchObservedRunningTime="2025-10-27 08:25:09.679485323 +0000 UTC m=+37.408043150" Oct 27 08:25:10.191601 systemd[1]: cri-containerd-84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f.scope: Deactivated successfully. Oct 27 08:25:10.192048 systemd[1]: cri-containerd-84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f.scope: Consumed 711ms CPU time, 167.2M memory peak, 12.8M read from disk, 171.3M written to disk. Oct 27 08:25:10.205451 containerd[1614]: time="2025-10-27T08:25:10.205340354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f\" id:\"84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f\" pid:3504 exited_at:{seconds:1761553510 nanos:204786316}" Oct 27 08:25:10.205928 containerd[1614]: time="2025-10-27T08:25:10.205618913Z" level=info msg="received exit event container_id:\"84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f\" id:\"84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f\" pid:3504 exited_at:{seconds:1761553510 nanos:204786316}" Oct 27 08:25:10.266010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84f2aa8f55d727cfd3adb766f57f6ff39d86387d6abf3dccc445937c11b65f7f-rootfs.mount: Deactivated successfully. Oct 27 08:25:10.299352 kubelet[2775]: I1027 08:25:10.299289 2775 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 27 08:25:10.397671 systemd[1]: Created slice kubepods-burstable-pod4b90e5aa_be4c_4511_9d4a_30c3f10ad641.slice - libcontainer container kubepods-burstable-pod4b90e5aa_be4c_4511_9d4a_30c3f10ad641.slice. 
Oct 27 08:25:10.418168 systemd[1]: Created slice kubepods-besteffort-pod3d26e3cf_e6a3_4346_be05_bc637815bb23.slice - libcontainer container kubepods-besteffort-pod3d26e3cf_e6a3_4346_be05_bc637815bb23.slice. Oct 27 08:25:10.428841 kubelet[2775]: I1027 08:25:10.427976 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d26e3cf-e6a3-4346-be05-bc637815bb23-config\") pod \"goldmane-666569f655-x85mb\" (UID: \"3d26e3cf-e6a3-4346-be05-bc637815bb23\") " pod="calico-system/goldmane-666569f655-x85mb" Oct 27 08:25:10.428841 kubelet[2775]: I1027 08:25:10.428014 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlm9g\" (UniqueName: \"kubernetes.io/projected/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-kube-api-access-hlm9g\") pod \"whisker-6d96f4ddf8-kzctp\" (UID: \"24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa\") " pod="calico-system/whisker-6d96f4ddf8-kzctp" Oct 27 08:25:10.428841 kubelet[2775]: I1027 08:25:10.428056 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d26e3cf-e6a3-4346-be05-bc637815bb23-goldmane-ca-bundle\") pod \"goldmane-666569f655-x85mb\" (UID: \"3d26e3cf-e6a3-4346-be05-bc637815bb23\") " pod="calico-system/goldmane-666569f655-x85mb" Oct 27 08:25:10.428841 kubelet[2775]: I1027 08:25:10.428072 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3d26e3cf-e6a3-4346-be05-bc637815bb23-goldmane-key-pair\") pod \"goldmane-666569f655-x85mb\" (UID: \"3d26e3cf-e6a3-4346-be05-bc637815bb23\") " pod="calico-system/goldmane-666569f655-x85mb" Oct 27 08:25:10.428841 kubelet[2775]: I1027 08:25:10.428091 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jqr2h\" (UniqueName: \"kubernetes.io/projected/4b90e5aa-be4c-4511-9d4a-30c3f10ad641-kube-api-access-jqr2h\") pod \"coredns-674b8bbfcf-t6j5w\" (UID: \"4b90e5aa-be4c-4511-9d4a-30c3f10ad641\") " pod="kube-system/coredns-674b8bbfcf-t6j5w" Oct 27 08:25:10.428639 systemd[1]: Created slice kubepods-besteffort-pod606a9dd0_b52b_437b_ae5f_d5d2e13b6421.slice - libcontainer container kubepods-besteffort-pod606a9dd0_b52b_437b_ae5f_d5d2e13b6421.slice. Oct 27 08:25:10.429589 kubelet[2775]: I1027 08:25:10.428122 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/606a9dd0-b52b-437b-ae5f-d5d2e13b6421-calico-apiserver-certs\") pod \"calico-apiserver-9d8b896f8-z6l9g\" (UID: \"606a9dd0-b52b-437b-ae5f-d5d2e13b6421\") " pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" Oct 27 08:25:10.429589 kubelet[2775]: I1027 08:25:10.428140 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-whisker-backend-key-pair\") pod \"whisker-6d96f4ddf8-kzctp\" (UID: \"24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa\") " pod="calico-system/whisker-6d96f4ddf8-kzctp" Oct 27 08:25:10.429589 kubelet[2775]: I1027 08:25:10.428161 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9slwc\" (UniqueName: \"kubernetes.io/projected/606a9dd0-b52b-437b-ae5f-d5d2e13b6421-kube-api-access-9slwc\") pod \"calico-apiserver-9d8b896f8-z6l9g\" (UID: \"606a9dd0-b52b-437b-ae5f-d5d2e13b6421\") " pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" Oct 27 08:25:10.429589 kubelet[2775]: I1027 08:25:10.428176 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-whisker-ca-bundle\") pod \"whisker-6d96f4ddf8-kzctp\" (UID: \"24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa\") " pod="calico-system/whisker-6d96f4ddf8-kzctp" Oct 27 08:25:10.429589 kubelet[2775]: I1027 08:25:10.428201 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8378213a-6035-4955-9239-ea2d03ab7a24-config-volume\") pod \"coredns-674b8bbfcf-mg2vx\" (UID: \"8378213a-6035-4955-9239-ea2d03ab7a24\") " pod="kube-system/coredns-674b8bbfcf-mg2vx" Oct 27 08:25:10.429834 kubelet[2775]: I1027 08:25:10.428216 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x8x2\" (UniqueName: \"kubernetes.io/projected/8378213a-6035-4955-9239-ea2d03ab7a24-kube-api-access-5x8x2\") pod \"coredns-674b8bbfcf-mg2vx\" (UID: \"8378213a-6035-4955-9239-ea2d03ab7a24\") " pod="kube-system/coredns-674b8bbfcf-mg2vx" Oct 27 08:25:10.429834 kubelet[2775]: I1027 08:25:10.428231 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b90e5aa-be4c-4511-9d4a-30c3f10ad641-config-volume\") pod \"coredns-674b8bbfcf-t6j5w\" (UID: \"4b90e5aa-be4c-4511-9d4a-30c3f10ad641\") " pod="kube-system/coredns-674b8bbfcf-t6j5w" Oct 27 08:25:10.429834 kubelet[2775]: I1027 08:25:10.428248 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkflg\" (UniqueName: \"kubernetes.io/projected/3d26e3cf-e6a3-4346-be05-bc637815bb23-kube-api-access-vkflg\") pod \"goldmane-666569f655-x85mb\" (UID: \"3d26e3cf-e6a3-4346-be05-bc637815bb23\") " pod="calico-system/goldmane-666569f655-x85mb" Oct 27 08:25:10.438134 systemd[1]: Created slice kubepods-besteffort-pod24115dc0_b4e8_4e8e_aab1_f31fbd0cf6fa.slice - libcontainer container 
kubepods-besteffort-pod24115dc0_b4e8_4e8e_aab1_f31fbd0cf6fa.slice. Oct 27 08:25:10.454631 systemd[1]: Created slice kubepods-besteffort-poddf40acd6_6199_4dea_8b25_0040183349ca.slice - libcontainer container kubepods-besteffort-poddf40acd6_6199_4dea_8b25_0040183349ca.slice. Oct 27 08:25:10.469895 systemd[1]: Created slice kubepods-besteffort-pod0ff60fd1_1efa_4af7_a609_59d81e9c7a0f.slice - libcontainer container kubepods-besteffort-pod0ff60fd1_1efa_4af7_a609_59d81e9c7a0f.slice. Oct 27 08:25:10.499158 systemd[1]: Created slice kubepods-burstable-pod8378213a_6035_4955_9239_ea2d03ab7a24.slice - libcontainer container kubepods-burstable-pod8378213a_6035_4955_9239_ea2d03ab7a24.slice. Oct 27 08:25:10.529792 kubelet[2775]: I1027 08:25:10.529729 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df40acd6-6199-4dea-8b25-0040183349ca-tigera-ca-bundle\") pod \"calico-kube-controllers-664fbc97f9-cb2jp\" (UID: \"df40acd6-6199-4dea-8b25-0040183349ca\") " pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" Oct 27 08:25:10.534094 kubelet[2775]: I1027 08:25:10.532738 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbz25\" (UniqueName: \"kubernetes.io/projected/df40acd6-6199-4dea-8b25-0040183349ca-kube-api-access-tbz25\") pod \"calico-kube-controllers-664fbc97f9-cb2jp\" (UID: \"df40acd6-6199-4dea-8b25-0040183349ca\") " pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" Oct 27 08:25:10.534533 kubelet[2775]: I1027 08:25:10.534336 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb4zh\" (UniqueName: \"kubernetes.io/projected/0ff60fd1-1efa-4af7-a609-59d81e9c7a0f-kube-api-access-mb4zh\") pod \"calico-apiserver-9d8b896f8-4w9rr\" (UID: \"0ff60fd1-1efa-4af7-a609-59d81e9c7a0f\") " 
pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" Oct 27 08:25:10.535634 kubelet[2775]: I1027 08:25:10.535603 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ff60fd1-1efa-4af7-a609-59d81e9c7a0f-calico-apiserver-certs\") pod \"calico-apiserver-9d8b896f8-4w9rr\" (UID: \"0ff60fd1-1efa-4af7-a609-59d81e9c7a0f\") " pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" Oct 27 08:25:10.679316 kubelet[2775]: E1027 08:25:10.679267 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:10.699032 containerd[1614]: time="2025-10-27T08:25:10.698932413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 27 08:25:10.715042 kubelet[2775]: E1027 08:25:10.714790 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:10.717257 containerd[1614]: time="2025-10-27T08:25:10.717201172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t6j5w,Uid:4b90e5aa-be4c-4511-9d4a-30c3f10ad641,Namespace:kube-system,Attempt:0,}" Oct 27 08:25:10.726719 containerd[1614]: time="2025-10-27T08:25:10.726641932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x85mb,Uid:3d26e3cf-e6a3-4346-be05-bc637815bb23,Namespace:calico-system,Attempt:0,}" Oct 27 08:25:10.734195 containerd[1614]: time="2025-10-27T08:25:10.734139542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d8b896f8-z6l9g,Uid:606a9dd0-b52b-437b-ae5f-d5d2e13b6421,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:25:10.775023 containerd[1614]: time="2025-10-27T08:25:10.774962695Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-664fbc97f9-cb2jp,Uid:df40acd6-6199-4dea-8b25-0040183349ca,Namespace:calico-system,Attempt:0,}" Oct 27 08:25:10.778033 containerd[1614]: time="2025-10-27T08:25:10.777952929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d96f4ddf8-kzctp,Uid:24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa,Namespace:calico-system,Attempt:0,}" Oct 27 08:25:10.791438 containerd[1614]: time="2025-10-27T08:25:10.790951789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d8b896f8-4w9rr,Uid:0ff60fd1-1efa-4af7-a609-59d81e9c7a0f,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:25:10.807430 kubelet[2775]: E1027 08:25:10.806913 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:10.812722 containerd[1614]: time="2025-10-27T08:25:10.812415443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mg2vx,Uid:8378213a-6035-4955-9239-ea2d03ab7a24,Namespace:kube-system,Attempt:0,}" Oct 27 08:25:11.076290 containerd[1614]: time="2025-10-27T08:25:11.076145478Z" level=error msg="Failed to destroy network for sandbox \"e8cca86404a78a29e8e6a8f93649080ac84ad3a0285c041d56e899075b8f2f5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.081965 containerd[1614]: time="2025-10-27T08:25:11.081657840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d8b896f8-4w9rr,Uid:0ff60fd1-1efa-4af7-a609-59d81e9c7a0f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8cca86404a78a29e8e6a8f93649080ac84ad3a0285c041d56e899075b8f2f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.086099 containerd[1614]: time="2025-10-27T08:25:11.086046663Z" level=error msg="Failed to destroy network for sandbox \"2657491d31641853ccfb73388399a08d6153b895c7df1f3f24f9c5f2b5cfd9ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.086766 kubelet[2775]: E1027 08:25:11.086677 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8cca86404a78a29e8e6a8f93649080ac84ad3a0285c041d56e899075b8f2f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.086949 kubelet[2775]: E1027 08:25:11.086788 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8cca86404a78a29e8e6a8f93649080ac84ad3a0285c041d56e899075b8f2f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" Oct 27 08:25:11.086949 kubelet[2775]: E1027 08:25:11.086833 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8cca86404a78a29e8e6a8f93649080ac84ad3a0285c041d56e899075b8f2f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" Oct 27 08:25:11.086949 kubelet[2775]: E1027 
08:25:11.086928 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9d8b896f8-4w9rr_calico-apiserver(0ff60fd1-1efa-4af7-a609-59d81e9c7a0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9d8b896f8-4w9rr_calico-apiserver(0ff60fd1-1efa-4af7-a609-59d81e9c7a0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8cca86404a78a29e8e6a8f93649080ac84ad3a0285c041d56e899075b8f2f5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" podUID="0ff60fd1-1efa-4af7-a609-59d81e9c7a0f" Oct 27 08:25:11.090752 containerd[1614]: time="2025-10-27T08:25:11.090612604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664fbc97f9-cb2jp,Uid:df40acd6-6199-4dea-8b25-0040183349ca,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2657491d31641853ccfb73388399a08d6153b895c7df1f3f24f9c5f2b5cfd9ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.091383 kubelet[2775]: E1027 08:25:11.090923 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2657491d31641853ccfb73388399a08d6153b895c7df1f3f24f9c5f2b5cfd9ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.091383 kubelet[2775]: E1027 08:25:11.091069 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"2657491d31641853ccfb73388399a08d6153b895c7df1f3f24f9c5f2b5cfd9ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" Oct 27 08:25:11.091383 kubelet[2775]: E1027 08:25:11.091137 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2657491d31641853ccfb73388399a08d6153b895c7df1f3f24f9c5f2b5cfd9ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" Oct 27 08:25:11.092678 kubelet[2775]: E1027 08:25:11.091212 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-664fbc97f9-cb2jp_calico-system(df40acd6-6199-4dea-8b25-0040183349ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-664fbc97f9-cb2jp_calico-system(df40acd6-6199-4dea-8b25-0040183349ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2657491d31641853ccfb73388399a08d6153b895c7df1f3f24f9c5f2b5cfd9ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" podUID="df40acd6-6199-4dea-8b25-0040183349ca" Oct 27 08:25:11.125380 containerd[1614]: time="2025-10-27T08:25:11.125303739Z" level=error msg="Failed to destroy network for sandbox \"d39b8c9e7049df27cc860cb287ca5ea7a5b0024d5ff69cedac95715635211855\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.137692 containerd[1614]: time="2025-10-27T08:25:11.137625055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mg2vx,Uid:8378213a-6035-4955-9239-ea2d03ab7a24,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39b8c9e7049df27cc860cb287ca5ea7a5b0024d5ff69cedac95715635211855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.138275 kubelet[2775]: E1027 08:25:11.138170 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39b8c9e7049df27cc860cb287ca5ea7a5b0024d5ff69cedac95715635211855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.138275 kubelet[2775]: E1027 08:25:11.138242 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39b8c9e7049df27cc860cb287ca5ea7a5b0024d5ff69cedac95715635211855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mg2vx" Oct 27 08:25:11.139651 kubelet[2775]: E1027 08:25:11.138311 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39b8c9e7049df27cc860cb287ca5ea7a5b0024d5ff69cedac95715635211855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mg2vx" Oct 27 08:25:11.139651 kubelet[2775]: E1027 08:25:11.138376 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mg2vx_kube-system(8378213a-6035-4955-9239-ea2d03ab7a24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mg2vx_kube-system(8378213a-6035-4955-9239-ea2d03ab7a24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d39b8c9e7049df27cc860cb287ca5ea7a5b0024d5ff69cedac95715635211855\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mg2vx" podUID="8378213a-6035-4955-9239-ea2d03ab7a24" Oct 27 08:25:11.155800 containerd[1614]: time="2025-10-27T08:25:11.155669792Z" level=error msg="Failed to destroy network for sandbox \"0aa9a49829b047c51e45fa690db937ffc38644c261e5e972cb63a7eb0bdbc51c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.159169 containerd[1614]: time="2025-10-27T08:25:11.159108556Z" level=error msg="Failed to destroy network for sandbox \"40d2163d75e8e279150431a0cd50693c44ad91930bc1791c2be0ae7c821b101e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.159974 containerd[1614]: time="2025-10-27T08:25:11.159748168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d8b896f8-z6l9g,Uid:606a9dd0-b52b-437b-ae5f-d5d2e13b6421,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"0aa9a49829b047c51e45fa690db937ffc38644c261e5e972cb63a7eb0bdbc51c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.160313 kubelet[2775]: E1027 08:25:11.160252 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa9a49829b047c51e45fa690db937ffc38644c261e5e972cb63a7eb0bdbc51c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.160480 kubelet[2775]: E1027 08:25:11.160340 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa9a49829b047c51e45fa690db937ffc38644c261e5e972cb63a7eb0bdbc51c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" Oct 27 08:25:11.160480 kubelet[2775]: E1027 08:25:11.160374 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa9a49829b047c51e45fa690db937ffc38644c261e5e972cb63a7eb0bdbc51c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" Oct 27 08:25:11.160480 kubelet[2775]: E1027 08:25:11.160451 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9d8b896f8-z6l9g_calico-apiserver(606a9dd0-b52b-437b-ae5f-d5d2e13b6421)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-9d8b896f8-z6l9g_calico-apiserver(606a9dd0-b52b-437b-ae5f-d5d2e13b6421)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0aa9a49829b047c51e45fa690db937ffc38644c261e5e972cb63a7eb0bdbc51c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" podUID="606a9dd0-b52b-437b-ae5f-d5d2e13b6421" Oct 27 08:25:11.162636 kubelet[2775]: E1027 08:25:11.162170 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40d2163d75e8e279150431a0cd50693c44ad91930bc1791c2be0ae7c821b101e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.162707 containerd[1614]: time="2025-10-27T08:25:11.160958309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t6j5w,Uid:4b90e5aa-be4c-4511-9d4a-30c3f10ad641,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40d2163d75e8e279150431a0cd50693c44ad91930bc1791c2be0ae7c821b101e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.163758 kubelet[2775]: E1027 08:25:11.162334 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40d2163d75e8e279150431a0cd50693c44ad91930bc1791c2be0ae7c821b101e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t6j5w" Oct 27 08:25:11.163758 kubelet[2775]: E1027 08:25:11.163010 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40d2163d75e8e279150431a0cd50693c44ad91930bc1791c2be0ae7c821b101e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t6j5w" Oct 27 08:25:11.164115 kubelet[2775]: E1027 08:25:11.163966 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t6j5w_kube-system(4b90e5aa-be4c-4511-9d4a-30c3f10ad641)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t6j5w_kube-system(4b90e5aa-be4c-4511-9d4a-30c3f10ad641)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40d2163d75e8e279150431a0cd50693c44ad91930bc1791c2be0ae7c821b101e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t6j5w" podUID="4b90e5aa-be4c-4511-9d4a-30c3f10ad641" Oct 27 08:25:11.182313 containerd[1614]: time="2025-10-27T08:25:11.182144988Z" level=error msg="Failed to destroy network for sandbox \"2ea24d8e986b2304d14ce685b0249f09d2842740ef8f6c0f0dc628be24d7eaa9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.184425 containerd[1614]: time="2025-10-27T08:25:11.184294624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d96f4ddf8-kzctp,Uid:24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa,Namespace:calico-system,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ea24d8e986b2304d14ce685b0249f09d2842740ef8f6c0f0dc628be24d7eaa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.186162 containerd[1614]: time="2025-10-27T08:25:11.185933563Z" level=error msg="Failed to destroy network for sandbox \"064c453f7987a36045e5c06dfd3e43f5f2ada5c95f638dadf2b343a6a5d5a11f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.186253 kubelet[2775]: E1027 08:25:11.184641 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ea24d8e986b2304d14ce685b0249f09d2842740ef8f6c0f0dc628be24d7eaa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.186253 kubelet[2775]: E1027 08:25:11.184713 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ea24d8e986b2304d14ce685b0249f09d2842740ef8f6c0f0dc628be24d7eaa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d96f4ddf8-kzctp" Oct 27 08:25:11.186253 kubelet[2775]: E1027 08:25:11.184736 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ea24d8e986b2304d14ce685b0249f09d2842740ef8f6c0f0dc628be24d7eaa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d96f4ddf8-kzctp" Oct 27 08:25:11.186425 kubelet[2775]: E1027 08:25:11.184817 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d96f4ddf8-kzctp_calico-system(24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d96f4ddf8-kzctp_calico-system(24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ea24d8e986b2304d14ce685b0249f09d2842740ef8f6c0f0dc628be24d7eaa9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d96f4ddf8-kzctp" podUID="24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa" Oct 27 08:25:11.188584 containerd[1614]: time="2025-10-27T08:25:11.186804666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x85mb,Uid:3d26e3cf-e6a3-4346-be05-bc637815bb23,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"064c453f7987a36045e5c06dfd3e43f5f2ada5c95f638dadf2b343a6a5d5a11f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.188835 kubelet[2775]: E1027 08:25:11.187353 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064c453f7987a36045e5c06dfd3e43f5f2ada5c95f638dadf2b343a6a5d5a11f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.188835 
kubelet[2775]: E1027 08:25:11.187630 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064c453f7987a36045e5c06dfd3e43f5f2ada5c95f638dadf2b343a6a5d5a11f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x85mb" Oct 27 08:25:11.188835 kubelet[2775]: E1027 08:25:11.188241 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064c453f7987a36045e5c06dfd3e43f5f2ada5c95f638dadf2b343a6a5d5a11f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x85mb" Oct 27 08:25:11.188994 kubelet[2775]: E1027 08:25:11.188699 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-x85mb_calico-system(3d26e3cf-e6a3-4346-be05-bc637815bb23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-x85mb_calico-system(3d26e3cf-e6a3-4346-be05-bc637815bb23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"064c453f7987a36045e5c06dfd3e43f5f2ada5c95f638dadf2b343a6a5d5a11f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-x85mb" podUID="3d26e3cf-e6a3-4346-be05-bc637815bb23" Oct 27 08:25:11.475209 systemd[1]: Created slice kubepods-besteffort-pod9d02f52d_c9e1_4d0e_b6df_042109e24c03.slice - libcontainer container kubepods-besteffort-pod9d02f52d_c9e1_4d0e_b6df_042109e24c03.slice. 
Oct 27 08:25:11.481825 containerd[1614]: time="2025-10-27T08:25:11.481771426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jn8zm,Uid:9d02f52d-c9e1-4d0e-b6df-042109e24c03,Namespace:calico-system,Attempt:0,}" Oct 27 08:25:11.560560 containerd[1614]: time="2025-10-27T08:25:11.559809321Z" level=error msg="Failed to destroy network for sandbox \"002a3aa805b843ede2e0854ee4df56d42e1e07940faebb23011c309e1c03d237\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.561374 containerd[1614]: time="2025-10-27T08:25:11.561326324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jn8zm,Uid:9d02f52d-c9e1-4d0e-b6df-042109e24c03,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"002a3aa805b843ede2e0854ee4df56d42e1e07940faebb23011c309e1c03d237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.562970 kubelet[2775]: E1027 08:25:11.562641 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"002a3aa805b843ede2e0854ee4df56d42e1e07940faebb23011c309e1c03d237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:25:11.562970 kubelet[2775]: E1027 08:25:11.562856 2775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"002a3aa805b843ede2e0854ee4df56d42e1e07940faebb23011c309e1c03d237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jn8zm" Oct 27 08:25:11.562970 kubelet[2775]: E1027 08:25:11.562904 2775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"002a3aa805b843ede2e0854ee4df56d42e1e07940faebb23011c309e1c03d237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jn8zm" Oct 27 08:25:11.565591 kubelet[2775]: E1027 08:25:11.563443 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jn8zm_calico-system(9d02f52d-c9e1-4d0e-b6df-042109e24c03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jn8zm_calico-system(9d02f52d-c9e1-4d0e-b6df-042109e24c03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"002a3aa805b843ede2e0854ee4df56d42e1e07940faebb23011c309e1c03d237\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:25:11.575944 systemd[1]: run-netns-cni\x2d589512b0\x2df146\x2deb1b\x2dec0e\x2d40d6fd5b2568.mount: Deactivated successfully. Oct 27 08:25:18.013777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2330246861.mount: Deactivated successfully. 
Oct 27 08:25:18.066830 containerd[1614]: time="2025-10-27T08:25:18.054442962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:18.076937 containerd[1614]: time="2025-10-27T08:25:18.075599179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 27 08:25:18.080251 containerd[1614]: time="2025-10-27T08:25:18.080191433Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:18.082203 containerd[1614]: time="2025-10-27T08:25:18.082145837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:25:18.083572 containerd[1614]: time="2025-10-27T08:25:18.083189964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.381824055s" Oct 27 08:25:18.083572 containerd[1614]: time="2025-10-27T08:25:18.083231025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 27 08:25:18.121916 containerd[1614]: time="2025-10-27T08:25:18.121867961Z" level=info msg="CreateContainer within sandbox \"d0eee653f043afb187fa721e7c9e0189e8157b98407c49742eeb86265e84dec3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 27 08:25:18.157877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1439117951.mount: 
Deactivated successfully. Oct 27 08:25:18.158173 containerd[1614]: time="2025-10-27T08:25:18.158073375Z" level=info msg="Container 797ff5e8f73494a4378f808e979b3d4ba294ad42c2660365c173a58a96df4166: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:25:18.187252 containerd[1614]: time="2025-10-27T08:25:18.187180318Z" level=info msg="CreateContainer within sandbox \"d0eee653f043afb187fa721e7c9e0189e8157b98407c49742eeb86265e84dec3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"797ff5e8f73494a4378f808e979b3d4ba294ad42c2660365c173a58a96df4166\"" Oct 27 08:25:18.189665 containerd[1614]: time="2025-10-27T08:25:18.188183903Z" level=info msg="StartContainer for \"797ff5e8f73494a4378f808e979b3d4ba294ad42c2660365c173a58a96df4166\"" Oct 27 08:25:18.193942 containerd[1614]: time="2025-10-27T08:25:18.193896168Z" level=info msg="connecting to shim 797ff5e8f73494a4378f808e979b3d4ba294ad42c2660365c173a58a96df4166" address="unix:///run/containerd/s/0cc0d2781df058cbcb466be5c52ab36e30470fa5879f9270722777244f43bce6" protocol=ttrpc version=3 Oct 27 08:25:18.346049 systemd[1]: Started cri-containerd-797ff5e8f73494a4378f808e979b3d4ba294ad42c2660365c173a58a96df4166.scope - libcontainer container 797ff5e8f73494a4378f808e979b3d4ba294ad42c2660365c173a58a96df4166. Oct 27 08:25:18.415640 containerd[1614]: time="2025-10-27T08:25:18.415500001Z" level=info msg="StartContainer for \"797ff5e8f73494a4378f808e979b3d4ba294ad42c2660365c173a58a96df4166\" returns successfully" Oct 27 08:25:18.560180 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 27 08:25:18.561853 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 27 08:25:18.734777 kubelet[2775]: E1027 08:25:18.734717 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:18.862958 kubelet[2775]: I1027 08:25:18.860883 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q6pzc" podStartSLOduration=1.7850207139999998 podStartE2EDuration="21.860860308s" podCreationTimestamp="2025-10-27 08:24:57 +0000 UTC" firstStartedPulling="2025-10-27 08:24:58.01353139 +0000 UTC m=+25.742089207" lastFinishedPulling="2025-10-27 08:25:18.089370998 +0000 UTC m=+45.817928801" observedRunningTime="2025-10-27 08:25:18.790822832 +0000 UTC m=+46.519380658" watchObservedRunningTime="2025-10-27 08:25:18.860860308 +0000 UTC m=+46.589418133" Oct 27 08:25:18.999418 kubelet[2775]: I1027 08:25:18.999240 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-whisker-backend-key-pair\") pod \"24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa\" (UID: \"24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa\") " Oct 27 08:25:18.999418 kubelet[2775]: I1027 08:25:18.999298 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlm9g\" (UniqueName: \"kubernetes.io/projected/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-kube-api-access-hlm9g\") pod \"24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa\" (UID: \"24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa\") " Oct 27 08:25:18.999418 kubelet[2775]: I1027 08:25:18.999316 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-whisker-ca-bundle\") pod \"24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa\" (UID: \"24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa\") " Oct 27 08:25:19.003258 kubelet[2775]: I1027 
08:25:19.003182 2775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa" (UID: "24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 08:25:19.007591 kubelet[2775]: I1027 08:25:19.007165 2775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa" (UID: "24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 27 08:25:19.009320 kubelet[2775]: I1027 08:25:19.009248 2775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-kube-api-access-hlm9g" (OuterVolumeSpecName: "kube-api-access-hlm9g") pod "24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa" (UID: "24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa"). InnerVolumeSpecName "kube-api-access-hlm9g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 08:25:19.017090 systemd[1]: var-lib-kubelet-pods-24115dc0\x2db4e8\x2d4e8e\x2daab1\x2df31fbd0cf6fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhlm9g.mount: Deactivated successfully. Oct 27 08:25:19.018092 systemd[1]: var-lib-kubelet-pods-24115dc0\x2db4e8\x2d4e8e\x2daab1\x2df31fbd0cf6fa-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 27 08:25:19.099740 kubelet[2775]: I1027 08:25:19.099659 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-whisker-backend-key-pair\") on node \"ci-9999.9.9-k-8ed45c9b51\" DevicePath \"\"" Oct 27 08:25:19.099740 kubelet[2775]: I1027 08:25:19.099699 2775 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hlm9g\" (UniqueName: \"kubernetes.io/projected/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-kube-api-access-hlm9g\") on node \"ci-9999.9.9-k-8ed45c9b51\" DevicePath \"\"" Oct 27 08:25:19.099740 kubelet[2775]: I1027 08:25:19.099711 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa-whisker-ca-bundle\") on node \"ci-9999.9.9-k-8ed45c9b51\" DevicePath \"\"" Oct 27 08:25:19.737534 kubelet[2775]: E1027 08:25:19.736317 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:19.750592 systemd[1]: Removed slice kubepods-besteffort-pod24115dc0_b4e8_4e8e_aab1_f31fbd0cf6fa.slice - libcontainer container kubepods-besteffort-pod24115dc0_b4e8_4e8e_aab1_f31fbd0cf6fa.slice. Oct 27 08:25:19.879226 systemd[1]: Created slice kubepods-besteffort-podd9ea689c_020a_4c9b_8879_a266b7c116e3.slice - libcontainer container kubepods-besteffort-podd9ea689c_020a_4c9b_8879_a266b7c116e3.slice. 
Oct 27 08:25:19.985449 containerd[1614]: time="2025-10-27T08:25:19.985390509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"797ff5e8f73494a4378f808e979b3d4ba294ad42c2660365c173a58a96df4166\" id:\"6ffc15e793d07e3c4b89d2ed3cee7414a1462aa3c04d85ebea838670cb5dbf6b\" pid:3841 exit_status:1 exited_at:{seconds:1761553519 nanos:984595008}" Oct 27 08:25:20.006510 kubelet[2775]: I1027 08:25:20.006191 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d9ea689c-020a-4c9b-8879-a266b7c116e3-whisker-backend-key-pair\") pod \"whisker-5b7d4c49f6-pwfq6\" (UID: \"d9ea689c-020a-4c9b-8879-a266b7c116e3\") " pod="calico-system/whisker-5b7d4c49f6-pwfq6" Oct 27 08:25:20.007310 kubelet[2775]: I1027 08:25:20.006875 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9ea689c-020a-4c9b-8879-a266b7c116e3-whisker-ca-bundle\") pod \"whisker-5b7d4c49f6-pwfq6\" (UID: \"d9ea689c-020a-4c9b-8879-a266b7c116e3\") " pod="calico-system/whisker-5b7d4c49f6-pwfq6" Oct 27 08:25:20.007310 kubelet[2775]: I1027 08:25:20.007245 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rm6b\" (UniqueName: \"kubernetes.io/projected/d9ea689c-020a-4c9b-8879-a266b7c116e3-kube-api-access-6rm6b\") pod \"whisker-5b7d4c49f6-pwfq6\" (UID: \"d9ea689c-020a-4c9b-8879-a266b7c116e3\") " pod="calico-system/whisker-5b7d4c49f6-pwfq6" Oct 27 08:25:20.186445 containerd[1614]: time="2025-10-27T08:25:20.186382872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b7d4c49f6-pwfq6,Uid:d9ea689c-020a-4c9b-8879-a266b7c116e3,Namespace:calico-system,Attempt:0,}" Oct 27 08:25:20.465271 kubelet[2775]: I1027 08:25:20.464732 2775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa" path="/var/lib/kubelet/pods/24115dc0-b4e8-4e8e-aab1-f31fbd0cf6fa/volumes" Oct 27 08:25:20.595856 systemd-networkd[1482]: cali6d11958ce5c: Link UP Oct 27 08:25:20.597039 systemd-networkd[1482]: cali6d11958ce5c: Gained carrier Oct 27 08:25:20.635574 containerd[1614]: 2025-10-27 08:25:20.265 [INFO][3883] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:25:20.635574 containerd[1614]: 2025-10-27 08:25:20.309 [INFO][3883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0 whisker-5b7d4c49f6- calico-system d9ea689c-020a-4c9b-8879-a266b7c116e3 925 0 2025-10-27 08:25:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b7d4c49f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-9999.9.9-k-8ed45c9b51 whisker-5b7d4c49f6-pwfq6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6d11958ce5c [] [] }} ContainerID="fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" Namespace="calico-system" Pod="whisker-5b7d4c49f6-pwfq6" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-" Oct 27 08:25:20.635574 containerd[1614]: 2025-10-27 08:25:20.309 [INFO][3883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" Namespace="calico-system" Pod="whisker-5b7d4c49f6-pwfq6" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0" Oct 27 08:25:20.635574 containerd[1614]: 2025-10-27 08:25:20.487 [INFO][3934] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" 
HandleID="k8s-pod-network.fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0" Oct 27 08:25:20.636958 containerd[1614]: 2025-10-27 08:25:20.492 [INFO][3934] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" HandleID="k8s-pod-network.fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f4340), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999.9.9-k-8ed45c9b51", "pod":"whisker-5b7d4c49f6-pwfq6", "timestamp":"2025-10-27 08:25:20.487394992 +0000 UTC"}, Hostname:"ci-9999.9.9-k-8ed45c9b51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:25:20.636958 containerd[1614]: 2025-10-27 08:25:20.492 [INFO][3934] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:25:20.636958 containerd[1614]: 2025-10-27 08:25:20.492 [INFO][3934] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:25:20.636958 containerd[1614]: 2025-10-27 08:25:20.492 [INFO][3934] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-k-8ed45c9b51' Oct 27 08:25:20.636958 containerd[1614]: 2025-10-27 08:25:20.511 [INFO][3934] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:20.636958 containerd[1614]: 2025-10-27 08:25:20.523 [INFO][3934] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:20.636958 containerd[1614]: 2025-10-27 08:25:20.532 [INFO][3934] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:20.636958 containerd[1614]: 2025-10-27 08:25:20.535 [INFO][3934] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:20.636958 containerd[1614]: 2025-10-27 08:25:20.540 [INFO][3934] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:20.637333 containerd[1614]: 2025-10-27 08:25:20.541 [INFO][3934] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:20.637333 containerd[1614]: 2025-10-27 08:25:20.543 [INFO][3934] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726 Oct 27 08:25:20.637333 containerd[1614]: 2025-10-27 08:25:20.551 [INFO][3934] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:20.637333 containerd[1614]: 2025-10-27 08:25:20.562 [INFO][3934] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.113.193/26] block=192.168.113.192/26 handle="k8s-pod-network.fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:20.637333 containerd[1614]: 2025-10-27 08:25:20.562 [INFO][3934] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.193/26] handle="k8s-pod-network.fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:20.637333 containerd[1614]: 2025-10-27 08:25:20.562 [INFO][3934] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:25:20.637333 containerd[1614]: 2025-10-27 08:25:20.562 [INFO][3934] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.193/26] IPv6=[] ContainerID="fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" HandleID="k8s-pod-network.fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0" Oct 27 08:25:20.641348 containerd[1614]: 2025-10-27 08:25:20.568 [INFO][3883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" Namespace="calico-system" Pod="whisker-5b7d4c49f6-pwfq6" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0", GenerateName:"whisker-5b7d4c49f6-", Namespace:"calico-system", SelfLink:"", UID:"d9ea689c-020a-4c9b-8879-a266b7c116e3", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 25, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b7d4c49f6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"", Pod:"whisker-5b7d4c49f6-pwfq6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6d11958ce5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:20.641348 containerd[1614]: 2025-10-27 08:25:20.569 [INFO][3883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.193/32] ContainerID="fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" Namespace="calico-system" Pod="whisker-5b7d4c49f6-pwfq6" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0" Oct 27 08:25:20.641572 containerd[1614]: 2025-10-27 08:25:20.569 [INFO][3883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d11958ce5c ContainerID="fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" Namespace="calico-system" Pod="whisker-5b7d4c49f6-pwfq6" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0" Oct 27 08:25:20.641572 containerd[1614]: 2025-10-27 08:25:20.598 [INFO][3883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" Namespace="calico-system" Pod="whisker-5b7d4c49f6-pwfq6" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0" Oct 27 08:25:20.641632 containerd[1614]: 2025-10-27 08:25:20.600 [INFO][3883] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" Namespace="calico-system" Pod="whisker-5b7d4c49f6-pwfq6" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0", GenerateName:"whisker-5b7d4c49f6-", Namespace:"calico-system", SelfLink:"", UID:"d9ea689c-020a-4c9b-8879-a266b7c116e3", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 25, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b7d4c49f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726", Pod:"whisker-5b7d4c49f6-pwfq6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6d11958ce5c", MAC:"8a:4e:3f:f2:f9:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:20.641696 containerd[1614]: 2025-10-27 08:25:20.618 [INFO][3883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" Namespace="calico-system" Pod="whisker-5b7d4c49f6-pwfq6" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-whisker--5b7d4c49f6--pwfq6-eth0" Oct 27 08:25:20.742215 kubelet[2775]: E1027 08:25:20.741440 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:20.840765 containerd[1614]: time="2025-10-27T08:25:20.840639622Z" level=info msg="connecting to shim fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726" address="unix:///run/containerd/s/0e3907e7b4077f20c17caf0d7b97e007b7f8580f6c26e5cec418b9e34d9de4f4" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:25:20.903601 systemd[1]: Started cri-containerd-fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726.scope - libcontainer container fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726. 
Oct 27 08:25:20.966326 containerd[1614]: time="2025-10-27T08:25:20.966274738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"797ff5e8f73494a4378f808e979b3d4ba294ad42c2660365c173a58a96df4166\" id:\"0671e236c0efe7765f6616f1d3f8f760bb8d06ce7efec9e5754f914bf56fa7f5\" pid:3985 exit_status:1 exited_at:{seconds:1761553520 nanos:965758342}" Oct 27 08:25:21.000666 containerd[1614]: time="2025-10-27T08:25:21.000094833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b7d4c49f6-pwfq6,Uid:d9ea689c-020a-4c9b-8879-a266b7c116e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"fee6848e35cde403a545342b6e6f6ab096c51eddbbea8163529b44554ba65726\"" Oct 27 08:25:21.004903 containerd[1614]: time="2025-10-27T08:25:21.004852304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:25:21.337289 containerd[1614]: time="2025-10-27T08:25:21.336817719Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:21.338054 containerd[1614]: time="2025-10-27T08:25:21.337972183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:25:21.338193 containerd[1614]: time="2025-10-27T08:25:21.338082124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:25:21.338526 kubelet[2775]: E1027 08:25:21.338466 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:25:21.338618 
kubelet[2775]: E1027 08:25:21.338549 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:25:21.341858 kubelet[2775]: E1027 08:25:21.341790 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fe777ef95cb44bc29c73f3d9909f5975,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6rm6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-5b7d4c49f6-pwfq6_calico-system(d9ea689c-020a-4c9b-8879-a266b7c116e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:21.345713 containerd[1614]: time="2025-10-27T08:25:21.345662491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:25:21.682228 containerd[1614]: time="2025-10-27T08:25:21.681691796Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:21.683051 containerd[1614]: time="2025-10-27T08:25:21.682980765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:25:21.683051 containerd[1614]: time="2025-10-27T08:25:21.683019952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:25:21.683633 kubelet[2775]: E1027 08:25:21.683477 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:25:21.683633 kubelet[2775]: E1027 08:25:21.683601 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:25:21.684056 kubelet[2775]: E1027 08:25:21.683972 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rm6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevice
s:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7d4c49f6-pwfq6_calico-system(d9ea689c-020a-4c9b-8879-a266b7c116e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:21.685532 kubelet[2775]: E1027 08:25:21.685461 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7d4c49f6-pwfq6" podUID="d9ea689c-020a-4c9b-8879-a266b7c116e3" Oct 27 08:25:21.730786 systemd-networkd[1482]: cali6d11958ce5c: Gained IPv6LL Oct 27 08:25:21.747742 kubelet[2775]: E1027 08:25:21.747411 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7d4c49f6-pwfq6" podUID="d9ea689c-020a-4c9b-8879-a266b7c116e3" Oct 27 08:25:22.460365 kubelet[2775]: E1027 08:25:22.460080 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:22.462446 containerd[1614]: time="2025-10-27T08:25:22.461598607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d8b896f8-4w9rr,Uid:0ff60fd1-1efa-4af7-a609-59d81e9c7a0f,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:25:22.462446 containerd[1614]: time="2025-10-27T08:25:22.462065930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t6j5w,Uid:4b90e5aa-be4c-4511-9d4a-30c3f10ad641,Namespace:kube-system,Attempt:0,}" Oct 27 08:25:22.650214 systemd-networkd[1482]: caliccfbfc2c7ee: Link UP Oct 27 08:25:22.650502 systemd-networkd[1482]: caliccfbfc2c7ee: Gained carrier Oct 27 08:25:22.671431 containerd[1614]: 2025-10-27 08:25:22.507 [INFO][4069] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:25:22.671431 containerd[1614]: 2025-10-27 08:25:22.537 [INFO][4069] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0 calico-apiserver-9d8b896f8- calico-apiserver 0ff60fd1-1efa-4af7-a609-59d81e9c7a0f 852 0 2025-10-27 08:24:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:9d8b896f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999.9.9-k-8ed45c9b51 calico-apiserver-9d8b896f8-4w9rr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliccfbfc2c7ee [] [] }} ContainerID="56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-4w9rr" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-" Oct 27 08:25:22.671431 containerd[1614]: 2025-10-27 08:25:22.537 [INFO][4069] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-4w9rr" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0" Oct 27 08:25:22.671431 containerd[1614]: 2025-10-27 08:25:22.581 [INFO][4094] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" HandleID="k8s-pod-network.56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0" Oct 27 08:25:22.671773 containerd[1614]: 2025-10-27 08:25:22.581 [INFO][4094] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" HandleID="k8s-pod-network.56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999.9.9-k-8ed45c9b51", "pod":"calico-apiserver-9d8b896f8-4w9rr", "timestamp":"2025-10-27 08:25:22.581220149 
+0000 UTC"}, Hostname:"ci-9999.9.9-k-8ed45c9b51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:25:22.671773 containerd[1614]: 2025-10-27 08:25:22.581 [INFO][4094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:25:22.671773 containerd[1614]: 2025-10-27 08:25:22.581 [INFO][4094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:25:22.671773 containerd[1614]: 2025-10-27 08:25:22.581 [INFO][4094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-k-8ed45c9b51' Oct 27 08:25:22.671773 containerd[1614]: 2025-10-27 08:25:22.593 [INFO][4094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.671773 containerd[1614]: 2025-10-27 08:25:22.599 [INFO][4094] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.671773 containerd[1614]: 2025-10-27 08:25:22.606 [INFO][4094] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.671773 containerd[1614]: 2025-10-27 08:25:22.609 [INFO][4094] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.671773 containerd[1614]: 2025-10-27 08:25:22.615 [INFO][4094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.672018 containerd[1614]: 2025-10-27 08:25:22.615 [INFO][4094] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.672018 containerd[1614]: 2025-10-27 
08:25:22.618 [INFO][4094] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732 Oct 27 08:25:22.672018 containerd[1614]: 2025-10-27 08:25:22.625 [INFO][4094] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.672018 containerd[1614]: 2025-10-27 08:25:22.635 [INFO][4094] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.194/26] block=192.168.113.192/26 handle="k8s-pod-network.56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.672018 containerd[1614]: 2025-10-27 08:25:22.635 [INFO][4094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.194/26] handle="k8s-pod-network.56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.672018 containerd[1614]: 2025-10-27 08:25:22.635 [INFO][4094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:25:22.672018 containerd[1614]: 2025-10-27 08:25:22.635 [INFO][4094] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.194/26] IPv6=[] ContainerID="56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" HandleID="k8s-pod-network.56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0" Oct 27 08:25:22.672178 containerd[1614]: 2025-10-27 08:25:22.643 [INFO][4069] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-4w9rr" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0", GenerateName:"calico-apiserver-9d8b896f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ff60fd1-1efa-4af7-a609-59d81e9c7a0f", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d8b896f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"", Pod:"calico-apiserver-9d8b896f8-4w9rr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.113.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccfbfc2c7ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:22.672700 containerd[1614]: 2025-10-27 08:25:22.643 [INFO][4069] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.194/32] ContainerID="56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-4w9rr" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0" Oct 27 08:25:22.672700 containerd[1614]: 2025-10-27 08:25:22.643 [INFO][4069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccfbfc2c7ee ContainerID="56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-4w9rr" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0" Oct 27 08:25:22.672700 containerd[1614]: 2025-10-27 08:25:22.650 [INFO][4069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-4w9rr" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0" Oct 27 08:25:22.672814 containerd[1614]: 2025-10-27 08:25:22.650 [INFO][4069] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-4w9rr" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0", GenerateName:"calico-apiserver-9d8b896f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ff60fd1-1efa-4af7-a609-59d81e9c7a0f", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d8b896f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732", Pod:"calico-apiserver-9d8b896f8-4w9rr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccfbfc2c7ee", MAC:"7a:bf:d7:2e:98:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:22.672909 containerd[1614]: 2025-10-27 08:25:22.668 [INFO][4069] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-4w9rr" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--4w9rr-eth0" Oct 27 08:25:22.707333 containerd[1614]: time="2025-10-27T08:25:22.707254849Z" level=info 
msg="connecting to shim 56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732" address="unix:///run/containerd/s/35df4f3040581d41b7d6ec4e805a581e83ffbed0167e127af3019e84bb5a310d" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:25:22.762988 kubelet[2775]: E1027 08:25:22.762030 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7d4c49f6-pwfq6" podUID="d9ea689c-020a-4c9b-8879-a266b7c116e3" Oct 27 08:25:22.798862 systemd[1]: Started cri-containerd-56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732.scope - libcontainer container 56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732. 
Oct 27 08:25:22.811362 systemd-networkd[1482]: cali41882979151: Link UP Oct 27 08:25:22.814823 systemd-networkd[1482]: cali41882979151: Gained carrier Oct 27 08:25:22.855595 containerd[1614]: 2025-10-27 08:25:22.515 [INFO][4071] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:25:22.855595 containerd[1614]: 2025-10-27 08:25:22.539 [INFO][4071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0 coredns-674b8bbfcf- kube-system 4b90e5aa-be4c-4511-9d4a-30c3f10ad641 842 0 2025-10-27 08:24:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999.9.9-k-8ed45c9b51 coredns-674b8bbfcf-t6j5w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali41882979151 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" Namespace="kube-system" Pod="coredns-674b8bbfcf-t6j5w" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-" Oct 27 08:25:22.855595 containerd[1614]: 2025-10-27 08:25:22.539 [INFO][4071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" Namespace="kube-system" Pod="coredns-674b8bbfcf-t6j5w" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0" Oct 27 08:25:22.855595 containerd[1614]: 2025-10-27 08:25:22.599 [INFO][4099] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" HandleID="k8s-pod-network.3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0" Oct 27 
08:25:22.855980 containerd[1614]: 2025-10-27 08:25:22.599 [INFO][4099] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" HandleID="k8s-pod-network.3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-9999.9.9-k-8ed45c9b51", "pod":"coredns-674b8bbfcf-t6j5w", "timestamp":"2025-10-27 08:25:22.599138931 +0000 UTC"}, Hostname:"ci-9999.9.9-k-8ed45c9b51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:25:22.855980 containerd[1614]: 2025-10-27 08:25:22.599 [INFO][4099] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:25:22.855980 containerd[1614]: 2025-10-27 08:25:22.635 [INFO][4099] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:25:22.855980 containerd[1614]: 2025-10-27 08:25:22.635 [INFO][4099] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-k-8ed45c9b51' Oct 27 08:25:22.855980 containerd[1614]: 2025-10-27 08:25:22.696 [INFO][4099] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.855980 containerd[1614]: 2025-10-27 08:25:22.708 [INFO][4099] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.855980 containerd[1614]: 2025-10-27 08:25:22.716 [INFO][4099] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.855980 containerd[1614]: 2025-10-27 08:25:22.723 [INFO][4099] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.855980 containerd[1614]: 2025-10-27 08:25:22.729 [INFO][4099] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.856433 containerd[1614]: 2025-10-27 08:25:22.730 [INFO][4099] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.856433 containerd[1614]: 2025-10-27 08:25:22.740 [INFO][4099] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852 Oct 27 08:25:22.856433 containerd[1614]: 2025-10-27 08:25:22.757 [INFO][4099] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.856433 containerd[1614]: 2025-10-27 08:25:22.775 [INFO][4099] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.113.195/26] block=192.168.113.192/26 handle="k8s-pod-network.3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.856433 containerd[1614]: 2025-10-27 08:25:22.777 [INFO][4099] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.195/26] handle="k8s-pod-network.3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:22.856433 containerd[1614]: 2025-10-27 08:25:22.777 [INFO][4099] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:25:22.856433 containerd[1614]: 2025-10-27 08:25:22.777 [INFO][4099] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.195/26] IPv6=[] ContainerID="3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" HandleID="k8s-pod-network.3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0" Oct 27 08:25:22.856748 containerd[1614]: 2025-10-27 08:25:22.805 [INFO][4071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" Namespace="kube-system" Pod="coredns-674b8bbfcf-t6j5w" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4b90e5aa-be4c-4511-9d4a-30c3f10ad641", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"", Pod:"coredns-674b8bbfcf-t6j5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41882979151", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:22.856748 containerd[1614]: 2025-10-27 08:25:22.805 [INFO][4071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.195/32] ContainerID="3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" Namespace="kube-system" Pod="coredns-674b8bbfcf-t6j5w" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0" Oct 27 08:25:22.856748 containerd[1614]: 2025-10-27 08:25:22.805 [INFO][4071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41882979151 ContainerID="3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" Namespace="kube-system" Pod="coredns-674b8bbfcf-t6j5w" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0" Oct 27 08:25:22.856748 containerd[1614]: 2025-10-27 08:25:22.810 [INFO][4071] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" Namespace="kube-system" Pod="coredns-674b8bbfcf-t6j5w" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0" Oct 27 08:25:22.856748 containerd[1614]: 2025-10-27 08:25:22.810 [INFO][4071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" Namespace="kube-system" Pod="coredns-674b8bbfcf-t6j5w" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4b90e5aa-be4c-4511-9d4a-30c3f10ad641", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852", Pod:"coredns-674b8bbfcf-t6j5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41882979151", 
MAC:"56:c5:f8:fd:4a:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:22.856748 containerd[1614]: 2025-10-27 08:25:22.843 [INFO][4071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" Namespace="kube-system" Pod="coredns-674b8bbfcf-t6j5w" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--t6j5w-eth0" Oct 27 08:25:22.900681 containerd[1614]: time="2025-10-27T08:25:22.900621450Z" level=info msg="connecting to shim 3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852" address="unix:///run/containerd/s/41f93b1ef4463a59ddcbab1fb28a48a56ef323632f7bfd1312edcfbe4e035349" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:25:22.936748 containerd[1614]: time="2025-10-27T08:25:22.936690795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d8b896f8-4w9rr,Uid:0ff60fd1-1efa-4af7-a609-59d81e9c7a0f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"56a9d692ca3faaaecffb66879aa5bb6b9a40d21c1dbd4b0047d42a2bda0f4732\"" Oct 27 08:25:22.940597 containerd[1614]: time="2025-10-27T08:25:22.939649472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:25:22.962853 systemd[1]: Started cri-containerd-3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852.scope - libcontainer container 3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852. 
Oct 27 08:25:23.056152 containerd[1614]: time="2025-10-27T08:25:23.055999900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t6j5w,Uid:4b90e5aa-be4c-4511-9d4a-30c3f10ad641,Namespace:kube-system,Attempt:0,} returns sandbox id \"3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852\"" Oct 27 08:25:23.059359 kubelet[2775]: E1027 08:25:23.059307 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:23.065590 containerd[1614]: time="2025-10-27T08:25:23.065520187Z" level=info msg="CreateContainer within sandbox \"3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:25:23.079407 containerd[1614]: time="2025-10-27T08:25:23.079286985Z" level=info msg="Container 0041ec8c522440c823cb7693a2fbe6984716e70e224dd22b79b84a2bc39a94d0: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:25:23.089575 containerd[1614]: time="2025-10-27T08:25:23.089484763Z" level=info msg="CreateContainer within sandbox \"3274cea972b3a4fa5ea1be9508ef23d894bf31fe86b6a4a593e12ef3ab3cb852\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0041ec8c522440c823cb7693a2fbe6984716e70e224dd22b79b84a2bc39a94d0\"" Oct 27 08:25:23.090532 containerd[1614]: time="2025-10-27T08:25:23.090459937Z" level=info msg="StartContainer for \"0041ec8c522440c823cb7693a2fbe6984716e70e224dd22b79b84a2bc39a94d0\"" Oct 27 08:25:23.093072 containerd[1614]: time="2025-10-27T08:25:23.092978035Z" level=info msg="connecting to shim 0041ec8c522440c823cb7693a2fbe6984716e70e224dd22b79b84a2bc39a94d0" address="unix:///run/containerd/s/41f93b1ef4463a59ddcbab1fb28a48a56ef323632f7bfd1312edcfbe4e035349" protocol=ttrpc version=3 Oct 27 08:25:23.136347 systemd[1]: Started cri-containerd-0041ec8c522440c823cb7693a2fbe6984716e70e224dd22b79b84a2bc39a94d0.scope - 
libcontainer container 0041ec8c522440c823cb7693a2fbe6984716e70e224dd22b79b84a2bc39a94d0. Oct 27 08:25:23.227552 containerd[1614]: time="2025-10-27T08:25:23.227483266Z" level=info msg="StartContainer for \"0041ec8c522440c823cb7693a2fbe6984716e70e224dd22b79b84a2bc39a94d0\" returns successfully" Oct 27 08:25:23.261438 containerd[1614]: time="2025-10-27T08:25:23.261383423Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:23.262670 containerd[1614]: time="2025-10-27T08:25:23.262613257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:25:23.262960 containerd[1614]: time="2025-10-27T08:25:23.262630788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:25:23.263255 kubelet[2775]: E1027 08:25:23.263180 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:23.263940 kubelet[2775]: E1027 08:25:23.263245 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:23.264740 kubelet[2775]: E1027 08:25:23.264688 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb4zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9d8b896f8-4w9rr_calico-apiserver(0ff60fd1-1efa-4af7-a609-59d81e9c7a0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:23.266403 kubelet[2775]: E1027 08:25:23.266250 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" podUID="0ff60fd1-1efa-4af7-a609-59d81e9c7a0f" Oct 27 08:25:23.460916 containerd[1614]: time="2025-10-27T08:25:23.460600788Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-9d8b896f8-z6l9g,Uid:606a9dd0-b52b-437b-ae5f-d5d2e13b6421,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:25:23.461593 containerd[1614]: time="2025-10-27T08:25:23.460776966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x85mb,Uid:3d26e3cf-e6a3-4346-be05-bc637815bb23,Namespace:calico-system,Attempt:0,}" Oct 27 08:25:23.764464 kubelet[2775]: E1027 08:25:23.763426 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:23.771800 kubelet[2775]: E1027 08:25:23.771664 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" podUID="0ff60fd1-1efa-4af7-a609-59d81e9c7a0f" Oct 27 08:25:23.857570 systemd-networkd[1482]: caliaa2750d0413: Link UP Oct 27 08:25:23.861395 systemd-networkd[1482]: caliaa2750d0413: Gained carrier Oct 27 08:25:23.891422 kubelet[2775]: I1027 08:25:23.891350 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t6j5w" podStartSLOduration=44.89132665 podStartE2EDuration="44.89132665s" podCreationTimestamp="2025-10-27 08:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:25:23.828030316 +0000 UTC m=+51.556588141" watchObservedRunningTime="2025-10-27 08:25:23.89132665 +0000 UTC m=+51.619884472" Oct 27 
08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.579 [INFO][4263] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.612 [INFO][4263] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0 goldmane-666569f655- calico-system 3d26e3cf-e6a3-4346-be05-bc637815bb23 848 0 2025-10-27 08:24:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-9999.9.9-k-8ed45c9b51 goldmane-666569f655-x85mb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliaa2750d0413 [] [] }} ContainerID="c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" Namespace="calico-system" Pod="goldmane-666569f655-x85mb" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.612 [INFO][4263] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" Namespace="calico-system" Pod="goldmane-666569f655-x85mb" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.709 [INFO][4286] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" HandleID="k8s-pod-network.c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.710 [INFO][4286] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" HandleID="k8s-pod-network.c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4510), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999.9.9-k-8ed45c9b51", "pod":"goldmane-666569f655-x85mb", "timestamp":"2025-10-27 08:25:23.709026711 +0000 UTC"}, Hostname:"ci-9999.9.9-k-8ed45c9b51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.710 [INFO][4286] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.711 [INFO][4286] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.711 [INFO][4286] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-k-8ed45c9b51' Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.728 [INFO][4286] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.737 [INFO][4286] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.752 [INFO][4286] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.758 [INFO][4286] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.766 [INFO][4286] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.768 [INFO][4286] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.776 [INFO][4286] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92 Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.804 [INFO][4286] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.841 [INFO][4286] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.113.196/26] block=192.168.113.192/26 handle="k8s-pod-network.c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.841 [INFO][4286] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.196/26] handle="k8s-pod-network.c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.841 [INFO][4286] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:25:23.905536 containerd[1614]: 2025-10-27 08:25:23.841 [INFO][4286] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.196/26] IPv6=[] ContainerID="c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" HandleID="k8s-pod-network.c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0" Oct 27 08:25:23.908324 containerd[1614]: 2025-10-27 08:25:23.849 [INFO][4263] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" Namespace="calico-system" Pod="goldmane-666569f655-x85mb" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3d26e3cf-e6a3-4346-be05-bc637815bb23", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"", Pod:"goldmane-666569f655-x85mb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaa2750d0413", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:23.908324 containerd[1614]: 2025-10-27 08:25:23.850 [INFO][4263] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.196/32] ContainerID="c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" Namespace="calico-system" Pod="goldmane-666569f655-x85mb" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0" Oct 27 08:25:23.908324 containerd[1614]: 2025-10-27 08:25:23.850 [INFO][4263] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa2750d0413 ContainerID="c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" Namespace="calico-system" Pod="goldmane-666569f655-x85mb" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0" Oct 27 08:25:23.908324 containerd[1614]: 2025-10-27 08:25:23.863 [INFO][4263] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" Namespace="calico-system" Pod="goldmane-666569f655-x85mb" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0" Oct 27 08:25:23.908324 containerd[1614]: 2025-10-27 08:25:23.865 [INFO][4263] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" Namespace="calico-system" Pod="goldmane-666569f655-x85mb" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3d26e3cf-e6a3-4346-be05-bc637815bb23", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92", Pod:"goldmane-666569f655-x85mb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaa2750d0413", MAC:"5a:a2:92:a6:2e:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:23.908324 containerd[1614]: 2025-10-27 08:25:23.897 [INFO][4263] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" Namespace="calico-system" Pod="goldmane-666569f655-x85mb" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-goldmane--666569f655--x85mb-eth0" Oct 27 08:25:23.969576 containerd[1614]: time="2025-10-27T08:25:23.967198044Z" level=info msg="connecting to shim c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92" address="unix:///run/containerd/s/a3dbd61aa38c806134d9e9d5a4abf9baadecafd0790f54a69b748e31dbf7d4e2" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:25:23.970836 systemd-networkd[1482]: cali41882979151: Gained IPv6LL Oct 27 08:25:24.060958 systemd[1]: Started cri-containerd-c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92.scope - libcontainer container c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92. Oct 27 08:25:24.123759 systemd-networkd[1482]: cali7165df9072b: Link UP Oct 27 08:25:24.124047 systemd-networkd[1482]: cali7165df9072b: Gained carrier Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:23.600 [INFO][4267] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:23.648 [INFO][4267] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0 calico-apiserver-9d8b896f8- calico-apiserver 606a9dd0-b52b-437b-ae5f-d5d2e13b6421 850 0 2025-10-27 08:24:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9d8b896f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999.9.9-k-8ed45c9b51 calico-apiserver-9d8b896f8-z6l9g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7165df9072b [] [] }} 
ContainerID="11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-z6l9g" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:23.648 [INFO][4267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-z6l9g" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:23.725 [INFO][4291] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" HandleID="k8s-pod-network.11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:23.726 [INFO][4291] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" HandleID="k8s-pod-network.11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324b80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999.9.9-k-8ed45c9b51", "pod":"calico-apiserver-9d8b896f8-z6l9g", "timestamp":"2025-10-27 08:25:23.725321205 +0000 UTC"}, Hostname:"ci-9999.9.9-k-8ed45c9b51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:23.726 [INFO][4291] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:23.842 [INFO][4291] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:23.842 [INFO][4291] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-k-8ed45c9b51' Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:23.887 [INFO][4291] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:23.948 [INFO][4291] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:24.023 [INFO][4291] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:24.046 [INFO][4291] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:24.058 [INFO][4291] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:24.062 [INFO][4291] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:24.083 [INFO][4291] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69 Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:24.091 [INFO][4291] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 
handle="k8s-pod-network.11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:24.114 [INFO][4291] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.197/26] block=192.168.113.192/26 handle="k8s-pod-network.11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:24.114 [INFO][4291] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.197/26] handle="k8s-pod-network.11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:24.114 [INFO][4291] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:25:24.152998 containerd[1614]: 2025-10-27 08:25:24.114 [INFO][4291] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.197/26] IPv6=[] ContainerID="11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" HandleID="k8s-pod-network.11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0" Oct 27 08:25:24.154642 containerd[1614]: 2025-10-27 08:25:24.118 [INFO][4267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-z6l9g" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0", GenerateName:"calico-apiserver-9d8b896f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"606a9dd0-b52b-437b-ae5f-d5d2e13b6421", ResourceVersion:"850", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d8b896f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"", Pod:"calico-apiserver-9d8b896f8-z6l9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7165df9072b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:24.154642 containerd[1614]: 2025-10-27 08:25:24.118 [INFO][4267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.197/32] ContainerID="11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-z6l9g" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0" Oct 27 08:25:24.154642 containerd[1614]: 2025-10-27 08:25:24.118 [INFO][4267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7165df9072b ContainerID="11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-z6l9g" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0" Oct 27 08:25:24.154642 containerd[1614]: 2025-10-27 08:25:24.125 [INFO][4267] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-z6l9g" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0" Oct 27 08:25:24.154642 containerd[1614]: 2025-10-27 08:25:24.126 [INFO][4267] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-z6l9g" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0", GenerateName:"calico-apiserver-9d8b896f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"606a9dd0-b52b-437b-ae5f-d5d2e13b6421", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d8b896f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69", Pod:"calico-apiserver-9d8b896f8-z6l9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.197/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7165df9072b", MAC:"5e:e7:20:24:2f:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:24.154642 containerd[1614]: 2025-10-27 08:25:24.146 [INFO][4267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" Namespace="calico-apiserver" Pod="calico-apiserver-9d8b896f8-z6l9g" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--apiserver--9d8b896f8--z6l9g-eth0" Oct 27 08:25:24.201867 containerd[1614]: time="2025-10-27T08:25:24.201389203Z" level=info msg="connecting to shim 11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69" address="unix:///run/containerd/s/2b391bf9cb9d8445c39c7f74e0adf4cf6c98c27192f6b2be36bb283348067f6f" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:25:24.263844 systemd[1]: Started cri-containerd-11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69.scope - libcontainer container 11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69. 
Oct 27 08:25:24.346613 containerd[1614]: time="2025-10-27T08:25:24.345629876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x85mb,Uid:3d26e3cf-e6a3-4346-be05-bc637815bb23,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4845a1bac78bf032e27b63dab3dd17a14b564fd1fec0cd6ba4453029007ce92\"" Oct 27 08:25:24.352079 containerd[1614]: time="2025-10-27T08:25:24.351982113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:25:24.355716 systemd-networkd[1482]: caliccfbfc2c7ee: Gained IPv6LL Oct 27 08:25:24.419238 containerd[1614]: time="2025-10-27T08:25:24.419091812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d8b896f8-z6l9g,Uid:606a9dd0-b52b-437b-ae5f-d5d2e13b6421,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"11381e8a7d92340a8c07aecaad278c663920ac575ef32bc6815c80a84b96ef69\"" Oct 27 08:25:24.748740 containerd[1614]: time="2025-10-27T08:25:24.748562323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:24.749730 containerd[1614]: time="2025-10-27T08:25:24.749608322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:25:24.749730 containerd[1614]: time="2025-10-27T08:25:24.749623520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:25:24.750227 kubelet[2775]: E1027 08:25:24.750038 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:25:24.750227 kubelet[2775]: E1027 08:25:24.750100 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:25:24.750850 kubelet[2775]: E1027 08:25:24.750394 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkflg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRe
adOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x85mb_calico-system(3d26e3cf-e6a3-4346-be05-bc637815bb23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:24.751522 containerd[1614]: time="2025-10-27T08:25:24.751104139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:25:24.752552 kubelet[2775]: E1027 08:25:24.752262 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x85mb" podUID="3d26e3cf-e6a3-4346-be05-bc637815bb23" Oct 27 08:25:24.777662 kubelet[2775]: E1027 08:25:24.777587 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" podUID="0ff60fd1-1efa-4af7-a609-59d81e9c7a0f" Oct 27 08:25:24.778594 kubelet[2775]: E1027 08:25:24.778553 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x85mb" podUID="3d26e3cf-e6a3-4346-be05-bc637815bb23" Oct 27 08:25:24.778818 kubelet[2775]: E1027 08:25:24.778798 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:24.965245 kubelet[2775]: I1027 08:25:24.965198 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:25:24.965662 kubelet[2775]: E1027 08:25:24.965637 2775 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:25.059792 containerd[1614]: time="2025-10-27T08:25:25.059494788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:25.060648 containerd[1614]: time="2025-10-27T08:25:25.060281665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:25:25.060648 containerd[1614]: time="2025-10-27T08:25:25.060346500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:25:25.061037 kubelet[2775]: E1027 08:25:25.060975 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:25.061037 kubelet[2775]: E1027 08:25:25.061037 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:25.061492 kubelet[2775]: E1027 08:25:25.061203 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9slwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-9d8b896f8-z6l9g_calico-apiserver(606a9dd0-b52b-437b-ae5f-d5d2e13b6421): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:25.062480 kubelet[2775]: E1027 08:25:25.062416 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" podUID="606a9dd0-b52b-437b-ae5f-d5d2e13b6421" Oct 27 08:25:25.459442 kubelet[2775]: E1027 08:25:25.459069 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:25.460540 containerd[1614]: time="2025-10-27T08:25:25.459687047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jn8zm,Uid:9d02f52d-c9e1-4d0e-b6df-042109e24c03,Namespace:calico-system,Attempt:0,}" Oct 27 08:25:25.460540 containerd[1614]: time="2025-10-27T08:25:25.460107154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mg2vx,Uid:8378213a-6035-4955-9239-ea2d03ab7a24,Namespace:kube-system,Attempt:0,}" Oct 27 08:25:25.699054 systemd-networkd[1482]: caliaa2750d0413: Gained IPv6LL Oct 27 08:25:25.726755 systemd-networkd[1482]: calie1b5e5ece42: Link UP Oct 27 08:25:25.728655 systemd-networkd[1482]: calie1b5e5ece42: Gained carrier Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.566 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0 coredns-674b8bbfcf- kube-system 8378213a-6035-4955-9239-ea2d03ab7a24 854 0 2025-10-27 08:24:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999.9.9-k-8ed45c9b51 coredns-674b8bbfcf-mg2vx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie1b5e5ece42 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2vx" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.567 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2vx" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.643 [INFO][4484] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" HandleID="k8s-pod-network.94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.646 [INFO][4484] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" HandleID="k8s-pod-network.94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d57e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-9999.9.9-k-8ed45c9b51", "pod":"coredns-674b8bbfcf-mg2vx", "timestamp":"2025-10-27 08:25:25.643393959 +0000 UTC"}, Hostname:"ci-9999.9.9-k-8ed45c9b51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.646 [INFO][4484] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.646 [INFO][4484] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.646 [INFO][4484] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-k-8ed45c9b51' Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.659 [INFO][4484] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.666 [INFO][4484] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.673 [INFO][4484] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.677 [INFO][4484] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.681 [INFO][4484] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.681 [INFO][4484] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.113.192/26 handle="k8s-pod-network.94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.683 [INFO][4484] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214 Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.690 [INFO][4484] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.701 [INFO][4484] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.198/26] block=192.168.113.192/26 handle="k8s-pod-network.94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.701 [INFO][4484] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.198/26] handle="k8s-pod-network.94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.706 [INFO][4484] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:25:25.756638 containerd[1614]: 2025-10-27 08:25:25.706 [INFO][4484] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.198/26] IPv6=[] ContainerID="94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" HandleID="k8s-pod-network.94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0" Oct 27 08:25:25.758803 containerd[1614]: 2025-10-27 08:25:25.721 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2vx" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8378213a-6035-4955-9239-ea2d03ab7a24", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"", Pod:"coredns-674b8bbfcf-mg2vx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calie1b5e5ece42", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:25.758803 containerd[1614]: 2025-10-27 08:25:25.721 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.198/32] ContainerID="94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2vx" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0" Oct 27 08:25:25.758803 containerd[1614]: 2025-10-27 08:25:25.721 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1b5e5ece42 ContainerID="94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2vx" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0" Oct 27 08:25:25.758803 containerd[1614]: 2025-10-27 08:25:25.725 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2vx" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0" Oct 27 08:25:25.758803 containerd[1614]: 2025-10-27 08:25:25.725 [INFO][4463] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2vx" 
WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8378213a-6035-4955-9239-ea2d03ab7a24", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214", Pod:"coredns-674b8bbfcf-mg2vx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1b5e5ece42", MAC:"52:40:66:84:59:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:25.758803 
containerd[1614]: 2025-10-27 08:25:25.742 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2vx" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-coredns--674b8bbfcf--mg2vx-eth0" Oct 27 08:25:25.786545 kubelet[2775]: E1027 08:25:25.782742 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:25.786545 kubelet[2775]: E1027 08:25:25.786217 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:25.786954 kubelet[2775]: E1027 08:25:25.786915 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" podUID="606a9dd0-b52b-437b-ae5f-d5d2e13b6421" Oct 27 08:25:25.804792 kubelet[2775]: E1027 08:25:25.804742 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-x85mb" podUID="3d26e3cf-e6a3-4346-be05-bc637815bb23" Oct 27 08:25:25.811764 containerd[1614]: time="2025-10-27T08:25:25.811715337Z" level=info msg="connecting to shim 94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214" address="unix:///run/containerd/s/108eb0212e24910737f3e964a73f6cc6ffa04978ece5556562a60ccc99a38b7f" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:25:25.894995 systemd-networkd[1482]: cali0a447bd71c2: Link UP Oct 27 08:25:25.897691 systemd-networkd[1482]: cali0a447bd71c2: Gained carrier Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.562 [INFO][4456] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0 csi-node-driver- calico-system 9d02f52d-c9e1-4d0e-b6df-042109e24c03 729 0 2025-10-27 08:24:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-9999.9.9-k-8ed45c9b51 csi-node-driver-jn8zm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0a447bd71c2 [] [] }} ContainerID="b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" Namespace="calico-system" Pod="csi-node-driver-jn8zm" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.569 [INFO][4456] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" Namespace="calico-system" Pod="csi-node-driver-jn8zm" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.648 
[INFO][4482] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" HandleID="k8s-pod-network.b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.650 [INFO][4482] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" HandleID="k8s-pod-network.b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5bc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999.9.9-k-8ed45c9b51", "pod":"csi-node-driver-jn8zm", "timestamp":"2025-10-27 08:25:25.648048286 +0000 UTC"}, Hostname:"ci-9999.9.9-k-8ed45c9b51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.650 [INFO][4482] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.701 [INFO][4482] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.702 [INFO][4482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-k-8ed45c9b51' Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.760 [INFO][4482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.772 [INFO][4482] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.789 [INFO][4482] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.799 [INFO][4482] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.809 [INFO][4482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.809 [INFO][4482] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.820 [INFO][4482] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819 Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.839 [INFO][4482] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.869 [INFO][4482] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.113.199/26] block=192.168.113.192/26 handle="k8s-pod-network.b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.869 [INFO][4482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.199/26] handle="k8s-pod-network.b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.869 [INFO][4482] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:25:25.923777 containerd[1614]: 2025-10-27 08:25:25.869 [INFO][4482] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.199/26] IPv6=[] ContainerID="b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" HandleID="k8s-pod-network.b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0" Oct 27 08:25:25.924935 containerd[1614]: 2025-10-27 08:25:25.883 [INFO][4456] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" Namespace="calico-system" Pod="csi-node-driver-jn8zm" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d02f52d-c9e1-4d0e-b6df-042109e24c03", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"", Pod:"csi-node-driver-jn8zm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a447bd71c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:25.924935 containerd[1614]: 2025-10-27 08:25:25.883 [INFO][4456] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.199/32] ContainerID="b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" Namespace="calico-system" Pod="csi-node-driver-jn8zm" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0" Oct 27 08:25:25.924935 containerd[1614]: 2025-10-27 08:25:25.884 [INFO][4456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a447bd71c2 ContainerID="b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" Namespace="calico-system" Pod="csi-node-driver-jn8zm" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0" Oct 27 08:25:25.924935 containerd[1614]: 2025-10-27 08:25:25.899 [INFO][4456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" Namespace="calico-system" Pod="csi-node-driver-jn8zm" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0" Oct 27 08:25:25.924935 
containerd[1614]: 2025-10-27 08:25:25.901 [INFO][4456] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" Namespace="calico-system" Pod="csi-node-driver-jn8zm" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d02f52d-c9e1-4d0e-b6df-042109e24c03", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819", Pod:"csi-node-driver-jn8zm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a447bd71c2", MAC:"ae:2f:51:22:a0:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:25.924935 containerd[1614]: 
2025-10-27 08:25:25.917 [INFO][4456] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" Namespace="calico-system" Pod="csi-node-driver-jn8zm" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-csi--node--driver--jn8zm-eth0" Oct 27 08:25:25.927779 systemd[1]: Started cri-containerd-94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214.scope - libcontainer container 94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214. Oct 27 08:25:25.972430 containerd[1614]: time="2025-10-27T08:25:25.970835500Z" level=info msg="connecting to shim b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819" address="unix:///run/containerd/s/8b85fa8c9e09c3c93cb1ceacc8f8185391b75d89416393599b17fd3e618b1881" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:25:26.052237 containerd[1614]: time="2025-10-27T08:25:26.051914621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mg2vx,Uid:8378213a-6035-4955-9239-ea2d03ab7a24,Namespace:kube-system,Attempt:0,} returns sandbox id \"94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214\"" Oct 27 08:25:26.060558 kubelet[2775]: E1027 08:25:26.057792 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:26.064810 systemd[1]: Started cri-containerd-b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819.scope - libcontainer container b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819. 
Oct 27 08:25:26.070153 containerd[1614]: time="2025-10-27T08:25:26.070007019Z" level=info msg="CreateContainer within sandbox \"94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:25:26.083127 containerd[1614]: time="2025-10-27T08:25:26.083081998Z" level=info msg="Container 5e06fa48f61481a615774e2cadc54dc229eded7bcce4b6a9dfe80bc84a4c87c3: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:25:26.091087 containerd[1614]: time="2025-10-27T08:25:26.091034486Z" level=info msg="CreateContainer within sandbox \"94113bb05241474d84cf952c6e9cefb7bcae1c0f5f859530af0a977536b01214\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e06fa48f61481a615774e2cadc54dc229eded7bcce4b6a9dfe80bc84a4c87c3\"" Oct 27 08:25:26.092054 containerd[1614]: time="2025-10-27T08:25:26.092026040Z" level=info msg="StartContainer for \"5e06fa48f61481a615774e2cadc54dc229eded7bcce4b6a9dfe80bc84a4c87c3\"" Oct 27 08:25:26.095551 containerd[1614]: time="2025-10-27T08:25:26.094456565Z" level=info msg="connecting to shim 5e06fa48f61481a615774e2cadc54dc229eded7bcce4b6a9dfe80bc84a4c87c3" address="unix:///run/containerd/s/108eb0212e24910737f3e964a73f6cc6ffa04978ece5556562a60ccc99a38b7f" protocol=ttrpc version=3 Oct 27 08:25:26.136157 systemd[1]: Started cri-containerd-5e06fa48f61481a615774e2cadc54dc229eded7bcce4b6a9dfe80bc84a4c87c3.scope - libcontainer container 5e06fa48f61481a615774e2cadc54dc229eded7bcce4b6a9dfe80bc84a4c87c3. 
Oct 27 08:25:26.147774 systemd-networkd[1482]: cali7165df9072b: Gained IPv6LL Oct 27 08:25:26.202443 containerd[1614]: time="2025-10-27T08:25:26.202330491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jn8zm,Uid:9d02f52d-c9e1-4d0e-b6df-042109e24c03,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3cbe2556e3bace2f2f121cb8992475c799e531be8b5c2fa9fe7dcc6700ea819\"" Oct 27 08:25:26.205319 containerd[1614]: time="2025-10-27T08:25:26.205195513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:25:26.219472 containerd[1614]: time="2025-10-27T08:25:26.219410383Z" level=info msg="StartContainer for \"5e06fa48f61481a615774e2cadc54dc229eded7bcce4b6a9dfe80bc84a4c87c3\" returns successfully" Oct 27 08:25:26.463857 containerd[1614]: time="2025-10-27T08:25:26.463815001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664fbc97f9-cb2jp,Uid:df40acd6-6199-4dea-8b25-0040183349ca,Namespace:calico-system,Attempt:0,}" Oct 27 08:25:26.559147 containerd[1614]: time="2025-10-27T08:25:26.558829308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:26.560266 containerd[1614]: time="2025-10-27T08:25:26.560186329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:25:26.560606 containerd[1614]: time="2025-10-27T08:25:26.560295177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:25:26.560996 kubelet[2775]: E1027 08:25:26.560470 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:25:26.560996 kubelet[2775]: E1027 08:25:26.560796 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:25:26.561451 kubelet[2775]: E1027 08:25:26.561353 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bm584,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELin
uxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jn8zm_calico-system(9d02f52d-c9e1-4d0e-b6df-042109e24c03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:26.564610 containerd[1614]: time="2025-10-27T08:25:26.564317499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:25:26.582484 systemd-networkd[1482]: vxlan.calico: Link UP Oct 27 08:25:26.582496 systemd-networkd[1482]: vxlan.calico: Gained carrier Oct 27 08:25:26.745268 systemd-networkd[1482]: calic3ec6ec7754: Link UP Oct 27 08:25:26.749571 systemd-networkd[1482]: calic3ec6ec7754: Gained carrier Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.549 [INFO][4669] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0 calico-kube-controllers-664fbc97f9- calico-system df40acd6-6199-4dea-8b25-0040183349ca 853 0 2025-10-27 08:24:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:664fbc97f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-9999.9.9-k-8ed45c9b51 calico-kube-controllers-664fbc97f9-cb2jp eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic3ec6ec7754 [] [] }} ContainerID="01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" Namespace="calico-system" Pod="calico-kube-controllers-664fbc97f9-cb2jp" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.549 [INFO][4669] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" Namespace="calico-system" Pod="calico-kube-controllers-664fbc97f9-cb2jp" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.651 [INFO][4701] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" HandleID="k8s-pod-network.01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.652 [INFO][4701] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" HandleID="k8s-pod-network.01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000395dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999.9.9-k-8ed45c9b51", "pod":"calico-kube-controllers-664fbc97f9-cb2jp", "timestamp":"2025-10-27 08:25:26.651828988 +0000 UTC"}, Hostname:"ci-9999.9.9-k-8ed45c9b51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.652 [INFO][4701] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.652 [INFO][4701] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.652 [INFO][4701] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999.9.9-k-8ed45c9b51' Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.668 [INFO][4701] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.701 [INFO][4701] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.708 [INFO][4701] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.711 [INFO][4701] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.714 [INFO][4701] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.714 [INFO][4701] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.717 [INFO][4701] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d Oct 27 08:25:26.773945 
containerd[1614]: 2025-10-27 08:25:26.723 [INFO][4701] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.736 [INFO][4701] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.200/26] block=192.168.113.192/26 handle="k8s-pod-network.01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.736 [INFO][4701] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.200/26] handle="k8s-pod-network.01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" host="ci-9999.9.9-k-8ed45c9b51" Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.736 [INFO][4701] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:25:26.773945 containerd[1614]: 2025-10-27 08:25:26.736 [INFO][4701] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.200/26] IPv6=[] ContainerID="01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" HandleID="k8s-pod-network.01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" Workload="ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0" Oct 27 08:25:26.774686 containerd[1614]: 2025-10-27 08:25:26.740 [INFO][4669] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" Namespace="calico-system" Pod="calico-kube-controllers-664fbc97f9-cb2jp" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0", 
GenerateName:"calico-kube-controllers-664fbc97f9-", Namespace:"calico-system", SelfLink:"", UID:"df40acd6-6199-4dea-8b25-0040183349ca", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664fbc97f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"", Pod:"calico-kube-controllers-664fbc97f9-cb2jp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3ec6ec7754", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:26.774686 containerd[1614]: 2025-10-27 08:25:26.740 [INFO][4669] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.200/32] ContainerID="01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" Namespace="calico-system" Pod="calico-kube-controllers-664fbc97f9-cb2jp" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0" Oct 27 08:25:26.774686 containerd[1614]: 2025-10-27 08:25:26.740 [INFO][4669] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3ec6ec7754 ContainerID="01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" Namespace="calico-system" 
Pod="calico-kube-controllers-664fbc97f9-cb2jp" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0" Oct 27 08:25:26.774686 containerd[1614]: 2025-10-27 08:25:26.743 [INFO][4669] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" Namespace="calico-system" Pod="calico-kube-controllers-664fbc97f9-cb2jp" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0" Oct 27 08:25:26.774686 containerd[1614]: 2025-10-27 08:25:26.744 [INFO][4669] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" Namespace="calico-system" Pod="calico-kube-controllers-664fbc97f9-cb2jp" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0", GenerateName:"calico-kube-controllers-664fbc97f9-", Namespace:"calico-system", SelfLink:"", UID:"df40acd6-6199-4dea-8b25-0040183349ca", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664fbc97f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-9999.9.9-k-8ed45c9b51", ContainerID:"01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d", Pod:"calico-kube-controllers-664fbc97f9-cb2jp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3ec6ec7754", MAC:"9e:ec:71:d9:b7:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:25:26.774686 containerd[1614]: 2025-10-27 08:25:26.767 [INFO][4669] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" Namespace="calico-system" Pod="calico-kube-controllers-664fbc97f9-cb2jp" WorkloadEndpoint="ci--9999.9.9--k--8ed45c9b51-k8s-calico--kube--controllers--664fbc97f9--cb2jp-eth0" Oct 27 08:25:26.786787 systemd-networkd[1482]: calie1b5e5ece42: Gained IPv6LL Oct 27 08:25:26.793358 kubelet[2775]: E1027 08:25:26.793118 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:26.811184 containerd[1614]: time="2025-10-27T08:25:26.811128186Z" level=info msg="connecting to shim 01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d" address="unix:///run/containerd/s/def4d922b33c1adbe1deaa432855aabe696dc5b2b3cf7eb02f0e811aa0b0b7f3" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:25:26.887679 kubelet[2775]: I1027 08:25:26.886028 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mg2vx" podStartSLOduration=47.885998545 podStartE2EDuration="47.885998545s" podCreationTimestamp="2025-10-27 08:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:25:26.848082602 +0000 UTC m=+54.576640426" watchObservedRunningTime="2025-10-27 08:25:26.885998545 +0000 UTC m=+54.614556370" Oct 27 08:25:26.902049 systemd[1]: Started cri-containerd-01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d.scope - libcontainer container 01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d. Oct 27 08:25:26.905391 containerd[1614]: time="2025-10-27T08:25:26.905319846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:26.906311 containerd[1614]: time="2025-10-27T08:25:26.906244044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:25:26.907608 containerd[1614]: time="2025-10-27T08:25:26.906260890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:25:26.907677 kubelet[2775]: E1027 08:25:26.906677 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:25:26.907677 kubelet[2775]: E1027 08:25:26.906741 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:25:26.907677 kubelet[2775]: E1027 08:25:26.906891 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bm584,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,E
nvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jn8zm_calico-system(9d02f52d-c9e1-4d0e-b6df-042109e24c03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:26.908399 kubelet[2775]: E1027 08:25:26.908352 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:25:26.981605 systemd-networkd[1482]: cali0a447bd71c2: Gained IPv6LL Oct 27 08:25:26.989692 containerd[1614]: time="2025-10-27T08:25:26.988495459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664fbc97f9-cb2jp,Uid:df40acd6-6199-4dea-8b25-0040183349ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"01536a5b4649cbf60bba28706d985dc6573f43aea8f516d56849d3043725fe8d\"" Oct 27 08:25:26.996456 containerd[1614]: time="2025-10-27T08:25:26.996185302Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:25:27.334653 containerd[1614]: time="2025-10-27T08:25:27.334455352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:27.336406 containerd[1614]: time="2025-10-27T08:25:27.336281503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:25:27.336406 containerd[1614]: time="2025-10-27T08:25:27.336352738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:25:27.337166 kubelet[2775]: E1027 08:25:27.336804 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:25:27.337166 kubelet[2775]: E1027 08:25:27.336858 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:25:27.337166 kubelet[2775]: E1027 08:25:27.337072 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbz25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-664fbc97f9-cb2jp_calico-system(df40acd6-6199-4dea-8b25-0040183349ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:27.338795 kubelet[2775]: E1027 08:25:27.338715 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" podUID="df40acd6-6199-4dea-8b25-0040183349ca" Oct 27 08:25:27.682962 systemd-networkd[1482]: vxlan.calico: Gained IPv6LL Oct 27 08:25:27.792641 kubelet[2775]: E1027 08:25:27.792601 2775 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:27.794994 kubelet[2775]: E1027 08:25:27.794928 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" podUID="df40acd6-6199-4dea-8b25-0040183349ca" Oct 27 08:25:27.796144 kubelet[2775]: E1027 08:25:27.796106 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:25:28.003422 systemd-networkd[1482]: calic3ec6ec7754: Gained IPv6LL Oct 27 08:25:28.794621 
kubelet[2775]: E1027 08:25:28.794578 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:28.795296 kubelet[2775]: E1027 08:25:28.795159 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" podUID="df40acd6-6199-4dea-8b25-0040183349ca" Oct 27 08:25:33.460503 containerd[1614]: time="2025-10-27T08:25:33.460362765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:25:33.779439 containerd[1614]: time="2025-10-27T08:25:33.779278547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:33.780180 containerd[1614]: time="2025-10-27T08:25:33.780068346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:25:33.780274 containerd[1614]: time="2025-10-27T08:25:33.780212849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:25:33.780642 kubelet[2775]: E1027 08:25:33.780575 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:25:33.781606 kubelet[2775]: E1027 08:25:33.780659 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:25:33.781606 kubelet[2775]: E1027 08:25:33.781110 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fe777ef95cb44bc29c73f3d9909f5975,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6rm6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:fa
lse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7d4c49f6-pwfq6_calico-system(d9ea689c-020a-4c9b-8879-a266b7c116e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:33.785405 containerd[1614]: time="2025-10-27T08:25:33.785363063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:25:34.113114 containerd[1614]: time="2025-10-27T08:25:34.112640442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:34.113647 containerd[1614]: time="2025-10-27T08:25:34.113609275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:25:34.113737 containerd[1614]: time="2025-10-27T08:25:34.113717144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:25:34.114063 kubelet[2775]: E1027 08:25:34.114007 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:25:34.114120 kubelet[2775]: E1027 08:25:34.114073 2775 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:25:34.114604 kubelet[2775]: E1027 08:25:34.114282 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rm6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,Sec
compProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7d4c49f6-pwfq6_calico-system(d9ea689c-020a-4c9b-8879-a266b7c116e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:34.115601 kubelet[2775]: E1027 08:25:34.115556 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7d4c49f6-pwfq6" podUID="d9ea689c-020a-4c9b-8879-a266b7c116e3" Oct 27 08:25:34.257890 systemd[1]: Started sshd@7-143.198.224.48:22-139.178.89.65:41478.service - OpenSSH per-connection server daemon (139.178.89.65:41478). 
Oct 27 08:25:34.390469 sshd[4847]: Accepted publickey for core from 139.178.89.65 port 41478 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:25:34.393484 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:34.400638 systemd-logind[1572]: New session 8 of user core. Oct 27 08:25:34.408812 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 27 08:25:34.922701 sshd[4850]: Connection closed by 139.178.89.65 port 41478 Oct 27 08:25:34.923869 sshd-session[4847]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:34.933268 systemd[1]: sshd@7-143.198.224.48:22-139.178.89.65:41478.service: Deactivated successfully. Oct 27 08:25:34.939379 systemd[1]: session-8.scope: Deactivated successfully. Oct 27 08:25:34.941378 systemd-logind[1572]: Session 8 logged out. Waiting for processes to exit. Oct 27 08:25:34.944543 systemd-logind[1572]: Removed session 8. Oct 27 08:25:39.461918 containerd[1614]: time="2025-10-27T08:25:39.461845371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:25:39.944461 systemd[1]: Started sshd@8-143.198.224.48:22-139.178.89.65:34654.service - OpenSSH per-connection server daemon (139.178.89.65:34654). Oct 27 08:25:40.036423 sshd[4875]: Accepted publickey for core from 139.178.89.65 port 34654 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:25:40.040040 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:40.047474 systemd-logind[1572]: New session 9 of user core. Oct 27 08:25:40.056946 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 27 08:25:40.222369 sshd[4878]: Connection closed by 139.178.89.65 port 34654 Oct 27 08:25:40.223164 sshd-session[4875]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:40.228477 systemd[1]: sshd@8-143.198.224.48:22-139.178.89.65:34654.service: Deactivated successfully. 
Oct 27 08:25:40.229866 containerd[1614]: time="2025-10-27T08:25:40.229826070Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:40.230735 containerd[1614]: time="2025-10-27T08:25:40.230697473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:25:40.231038 containerd[1614]: time="2025-10-27T08:25:40.230887751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:25:40.231297 kubelet[2775]: E1027 08:25:40.231244 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:40.231683 kubelet[2775]: E1027 08:25:40.231320 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:40.232117 kubelet[2775]: E1027 08:25:40.231586 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb4zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9d8b896f8-4w9rr_calico-apiserver(0ff60fd1-1efa-4af7-a609-59d81e9c7a0f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:40.234571 kubelet[2775]: E1027 08:25:40.233778 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" podUID="0ff60fd1-1efa-4af7-a609-59d81e9c7a0f" Oct 27 08:25:40.235912 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 08:25:40.238609 systemd-logind[1572]: Session 9 logged out. Waiting for processes to exit. Oct 27 08:25:40.242844 systemd-logind[1572]: Removed session 9. 
Oct 27 08:25:40.461735 containerd[1614]: time="2025-10-27T08:25:40.461682121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:25:40.797334 containerd[1614]: time="2025-10-27T08:25:40.797275334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:40.798074 containerd[1614]: time="2025-10-27T08:25:40.798028667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:25:40.798190 containerd[1614]: time="2025-10-27T08:25:40.798119169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:25:40.798420 kubelet[2775]: E1027 08:25:40.798387 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:40.798564 kubelet[2775]: E1027 08:25:40.798547 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:40.800716 containerd[1614]: time="2025-10-27T08:25:40.798937879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:25:40.800841 kubelet[2775]: E1027 08:25:40.799132 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9slwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9d8b896f8-z6l9g_calico-apiserver(606a9dd0-b52b-437b-ae5f-d5d2e13b6421): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:40.801411 kubelet[2775]: E1027 08:25:40.801352 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" podUID="606a9dd0-b52b-437b-ae5f-d5d2e13b6421" Oct 27 08:25:41.114764 containerd[1614]: time="2025-10-27T08:25:41.114632496Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:41.116537 containerd[1614]: 
time="2025-10-27T08:25:41.115973402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:25:41.116938 containerd[1614]: time="2025-10-27T08:25:41.116044758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:25:41.117233 kubelet[2775]: E1027 08:25:41.117172 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:25:41.117730 kubelet[2775]: E1027 08:25:41.117690 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:25:41.118271 kubelet[2775]: E1027 08:25:41.118159 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkflg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x85mb_calico-system(3d26e3cf-e6a3-4346-be05-bc637815bb23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:41.119673 kubelet[2775]: E1027 08:25:41.119631 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x85mb" podUID="3d26e3cf-e6a3-4346-be05-bc637815bb23" Oct 27 08:25:42.464447 containerd[1614]: time="2025-10-27T08:25:42.463906815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:25:43.304184 containerd[1614]: time="2025-10-27T08:25:43.304126938Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Oct 27 08:25:43.305077 containerd[1614]: time="2025-10-27T08:25:43.305016333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:25:43.305413 containerd[1614]: time="2025-10-27T08:25:43.305078835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:25:43.305499 kubelet[2775]: E1027 08:25:43.305403 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:25:43.305499 kubelet[2775]: E1027 08:25:43.305472 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:25:43.306210 kubelet[2775]: E1027 08:25:43.305674 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bm584,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jn8zm_calico-system(9d02f52d-c9e1-4d0e-b6df-042109e24c03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:43.308382 containerd[1614]: time="2025-10-27T08:25:43.308343166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:25:43.640318 containerd[1614]: time="2025-10-27T08:25:43.640061232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:43.641383 containerd[1614]: time="2025-10-27T08:25:43.641330990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:25:43.641493 containerd[1614]: time="2025-10-27T08:25:43.641443595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:25:43.641743 kubelet[2775]: E1027 08:25:43.641700 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:25:43.641993 kubelet[2775]: E1027 08:25:43.641935 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:25:43.643212 containerd[1614]: 
time="2025-10-27T08:25:43.642480423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:25:43.643334 kubelet[2775]: E1027 08:25:43.643133 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bm584,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevice
s:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jn8zm_calico-system(9d02f52d-c9e1-4d0e-b6df-042109e24c03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:43.645173 kubelet[2775]: E1027 08:25:43.644704 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:25:43.995477 containerd[1614]: time="2025-10-27T08:25:43.995414648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:43.996828 containerd[1614]: time="2025-10-27T08:25:43.996765661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:25:43.997162 
containerd[1614]: time="2025-10-27T08:25:43.996928205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:25:43.997275 kubelet[2775]: E1027 08:25:43.997200 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:25:43.997275 kubelet[2775]: E1027 08:25:43.997258 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:25:43.997722 kubelet[2775]: E1027 08:25:43.997450 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbz25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-664fbc97f9-cb2jp_calico-system(df40acd6-6199-4dea-8b25-0040183349ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:43.999256 kubelet[2775]: E1027 08:25:43.999204 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" podUID="df40acd6-6199-4dea-8b25-0040183349ca" Oct 27 08:25:45.244575 systemd[1]: Started sshd@9-143.198.224.48:22-139.178.89.65:34660.service - OpenSSH per-connection server daemon (139.178.89.65:34660). 
Oct 27 08:25:45.331389 sshd[4890]: Accepted publickey for core from 139.178.89.65 port 34660 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:25:45.333701 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:45.339773 systemd-logind[1572]: New session 10 of user core. Oct 27 08:25:45.346817 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 27 08:25:45.461625 kubelet[2775]: E1027 08:25:45.461547 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7d4c49f6-pwfq6" podUID="d9ea689c-020a-4c9b-8879-a266b7c116e3" Oct 27 08:25:45.540637 sshd[4894]: Connection closed by 139.178.89.65 port 34660 Oct 27 08:25:45.540705 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:45.553194 systemd[1]: sshd@9-143.198.224.48:22-139.178.89.65:34660.service: Deactivated successfully. Oct 27 08:25:45.556261 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 08:25:45.558408 systemd-logind[1572]: Session 10 logged out. Waiting for processes to exit. 
Oct 27 08:25:45.561934 systemd[1]: Started sshd@10-143.198.224.48:22-139.178.89.65:34666.service - OpenSSH per-connection server daemon (139.178.89.65:34666). Oct 27 08:25:45.563663 systemd-logind[1572]: Removed session 10. Oct 27 08:25:45.660089 sshd[4907]: Accepted publickey for core from 139.178.89.65 port 34666 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:25:45.662587 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:45.669598 systemd-logind[1572]: New session 11 of user core. Oct 27 08:25:45.676831 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 27 08:25:45.894644 sshd[4910]: Connection closed by 139.178.89.65 port 34666 Oct 27 08:25:45.898631 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:45.913828 systemd[1]: sshd@10-143.198.224.48:22-139.178.89.65:34666.service: Deactivated successfully. Oct 27 08:25:45.918705 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 08:25:45.920804 systemd-logind[1572]: Session 11 logged out. Waiting for processes to exit. Oct 27 08:25:45.927437 systemd[1]: Started sshd@11-143.198.224.48:22-139.178.89.65:34672.service - OpenSSH per-connection server daemon (139.178.89.65:34672). Oct 27 08:25:45.930452 systemd-logind[1572]: Removed session 11. Oct 27 08:25:46.046062 sshd[4920]: Accepted publickey for core from 139.178.89.65 port 34672 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:25:46.048044 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:46.055450 systemd-logind[1572]: New session 12 of user core. Oct 27 08:25:46.063757 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 27 08:25:46.235884 sshd[4923]: Connection closed by 139.178.89.65 port 34672 Oct 27 08:25:46.236970 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:46.243045 systemd[1]: sshd@11-143.198.224.48:22-139.178.89.65:34672.service: Deactivated successfully. Oct 27 08:25:46.246717 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 08:25:46.248205 systemd-logind[1572]: Session 12 logged out. Waiting for processes to exit. Oct 27 08:25:46.249871 systemd-logind[1572]: Removed session 12. Oct 27 08:25:50.838404 containerd[1614]: time="2025-10-27T08:25:50.838058542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"797ff5e8f73494a4378f808e979b3d4ba294ad42c2660365c173a58a96df4166\" id:\"ee8b0e2d30231a0d83544fdb2cd286afdee394ec31b17beb2452b72fe188d0de\" pid:4954 exited_at:{seconds:1761553550 nanos:837651149}" Oct 27 08:25:50.842864 kubelet[2775]: E1027 08:25:50.842791 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:51.253836 systemd[1]: Started sshd@12-143.198.224.48:22-139.178.89.65:33120.service - OpenSSH per-connection server daemon (139.178.89.65:33120). Oct 27 08:25:51.321285 sshd[4966]: Accepted publickey for core from 139.178.89.65 port 33120 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:25:51.323355 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:51.330134 systemd-logind[1572]: New session 13 of user core. Oct 27 08:25:51.335834 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 27 08:25:51.474843 sshd[4969]: Connection closed by 139.178.89.65 port 33120 Oct 27 08:25:51.475586 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:51.482418 systemd[1]: sshd@12-143.198.224.48:22-139.178.89.65:33120.service: Deactivated successfully. Oct 27 08:25:51.486084 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 08:25:51.488747 systemd-logind[1572]: Session 13 logged out. Waiting for processes to exit. Oct 27 08:25:51.491748 systemd-logind[1572]: Removed session 13. Oct 27 08:25:53.461538 kubelet[2775]: E1027 08:25:53.461164 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" podUID="606a9dd0-b52b-437b-ae5f-d5d2e13b6421" Oct 27 08:25:53.462447 kubelet[2775]: E1027 08:25:53.462273 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x85mb" podUID="3d26e3cf-e6a3-4346-be05-bc637815bb23" Oct 27 08:25:54.459141 kubelet[2775]: E1027 08:25:54.458688 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:54.462534 kubelet[2775]: E1027 08:25:54.461709 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" podUID="0ff60fd1-1efa-4af7-a609-59d81e9c7a0f" Oct 27 08:25:56.459554 kubelet[2775]: E1027 08:25:56.458699 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:25:56.462769 kubelet[2775]: E1027 08:25:56.462707 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" podUID="df40acd6-6199-4dea-8b25-0040183349ca" Oct 27 08:25:56.463603 kubelet[2775]: E1027 08:25:56.463441 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:25:56.497946 systemd[1]: Started sshd@13-143.198.224.48:22-139.178.89.65:37340.service - OpenSSH per-connection server daemon (139.178.89.65:37340). Oct 27 08:25:56.581323 sshd[4988]: Accepted publickey for core from 139.178.89.65 port 37340 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:25:56.584692 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:56.596896 systemd-logind[1572]: New session 14 of user core. Oct 27 08:25:56.602735 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 27 08:25:56.768903 sshd[4992]: Connection closed by 139.178.89.65 port 37340 Oct 27 08:25:56.768608 sshd-session[4988]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:56.775329 systemd[1]: sshd@13-143.198.224.48:22-139.178.89.65:37340.service: Deactivated successfully. Oct 27 08:25:56.780259 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 08:25:56.785754 systemd-logind[1572]: Session 14 logged out. Waiting for processes to exit. Oct 27 08:25:56.787353 systemd-logind[1572]: Removed session 14. 
Oct 27 08:25:59.465680 containerd[1614]: time="2025-10-27T08:25:59.465631098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:25:59.805739 containerd[1614]: time="2025-10-27T08:25:59.805434000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:59.806647 containerd[1614]: time="2025-10-27T08:25:59.806069879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:25:59.806647 containerd[1614]: time="2025-10-27T08:25:59.806128637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:25:59.806786 kubelet[2775]: E1027 08:25:59.806332 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:25:59.806786 kubelet[2775]: E1027 08:25:59.806384 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:25:59.806786 kubelet[2775]: E1027 08:25:59.806585 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fe777ef95cb44bc29c73f3d9909f5975,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6rm6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7d4c49f6-pwfq6_calico-system(d9ea689c-020a-4c9b-8879-a266b7c116e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:59.810385 containerd[1614]: time="2025-10-27T08:25:59.809908597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 
08:26:00.137405 containerd[1614]: time="2025-10-27T08:26:00.137072955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:26:00.139204 containerd[1614]: time="2025-10-27T08:26:00.139133610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:26:00.140045 containerd[1614]: time="2025-10-27T08:26:00.139277895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:26:00.140416 kubelet[2775]: E1027 08:26:00.140366 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:26:00.140706 kubelet[2775]: E1027 08:26:00.140431 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:26:00.141738 kubelet[2775]: E1027 08:26:00.141543 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rm6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7d4c49f6-pwfq6_calico-system(d9ea689c-020a-4c9b-8879-a266b7c116e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:26:00.142908 kubelet[2775]: E1027 08:26:00.142834 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7d4c49f6-pwfq6" podUID="d9ea689c-020a-4c9b-8879-a266b7c116e3" Oct 27 08:26:01.793629 systemd[1]: Started sshd@14-143.198.224.48:22-139.178.89.65:37342.service - OpenSSH per-connection server daemon (139.178.89.65:37342). Oct 27 08:26:01.969542 sshd[5005]: Accepted publickey for core from 139.178.89.65 port 37342 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:26:01.973694 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:26:01.983257 systemd-logind[1572]: New session 15 of user core. Oct 27 08:26:01.987895 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 27 08:26:02.227493 sshd[5008]: Connection closed by 139.178.89.65 port 37342 Oct 27 08:26:02.229038 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Oct 27 08:26:02.236396 systemd[1]: sshd@14-143.198.224.48:22-139.178.89.65:37342.service: Deactivated successfully. 
Oct 27 08:26:02.242213 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 08:26:02.246408 systemd-logind[1572]: Session 15 logged out. Waiting for processes to exit. Oct 27 08:26:02.248652 systemd-logind[1572]: Removed session 15. Oct 27 08:26:04.463620 containerd[1614]: time="2025-10-27T08:26:04.463540723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:26:04.878603 containerd[1614]: time="2025-10-27T08:26:04.878028632Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:26:04.879708 containerd[1614]: time="2025-10-27T08:26:04.879532498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:26:04.879708 containerd[1614]: time="2025-10-27T08:26:04.879534590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:26:04.880405 kubelet[2775]: E1027 08:26:04.879873 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:26:04.880405 kubelet[2775]: E1027 08:26:04.879929 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:26:04.881411 containerd[1614]: 
time="2025-10-27T08:26:04.881168127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:26:04.881654 kubelet[2775]: E1027 08:26:04.881218 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9slwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9d8b896f8-z6l9g_calico-apiserver(606a9dd0-b52b-437b-ae5f-d5d2e13b6421): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:26:04.883664 kubelet[2775]: E1027 08:26:04.883481 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" podUID="606a9dd0-b52b-437b-ae5f-d5d2e13b6421" Oct 27 08:26:05.383642 containerd[1614]: time="2025-10-27T08:26:05.383588786Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:26:05.384937 containerd[1614]: 
time="2025-10-27T08:26:05.384733972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:26:05.384937 containerd[1614]: time="2025-10-27T08:26:05.384880285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:26:05.385790 kubelet[2775]: E1027 08:26:05.385127 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:26:05.385790 kubelet[2775]: E1027 08:26:05.385200 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:26:05.385790 kubelet[2775]: E1027 08:26:05.385407 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkflg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x85mb_calico-system(3d26e3cf-e6a3-4346-be05-bc637815bb23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:26:05.386902 kubelet[2775]: E1027 08:26:05.386836 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x85mb" podUID="3d26e3cf-e6a3-4346-be05-bc637815bb23" Oct 27 08:26:07.247176 systemd[1]: Started sshd@15-143.198.224.48:22-139.178.89.65:43828.service - OpenSSH per-connection server daemon (139.178.89.65:43828). 
Oct 27 08:26:07.362883 sshd[5026]: Accepted publickey for core from 139.178.89.65 port 43828 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:26:07.366692 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:26:07.387264 systemd-logind[1572]: New session 16 of user core. Oct 27 08:26:07.389890 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 08:26:07.639341 sshd[5031]: Connection closed by 139.178.89.65 port 43828 Oct 27 08:26:07.640246 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Oct 27 08:26:07.654593 systemd[1]: sshd@15-143.198.224.48:22-139.178.89.65:43828.service: Deactivated successfully. Oct 27 08:26:07.657626 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 08:26:07.659620 systemd-logind[1572]: Session 16 logged out. Waiting for processes to exit. Oct 27 08:26:07.664411 systemd[1]: Started sshd@16-143.198.224.48:22-139.178.89.65:43834.service - OpenSSH per-connection server daemon (139.178.89.65:43834). Oct 27 08:26:07.667581 systemd-logind[1572]: Removed session 16. Oct 27 08:26:07.750650 sshd[5043]: Accepted publickey for core from 139.178.89.65 port 43834 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:26:07.752579 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:26:07.760060 systemd-logind[1572]: New session 17 of user core. Oct 27 08:26:07.770877 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 08:26:08.095809 sshd[5046]: Connection closed by 139.178.89.65 port 43834 Oct 27 08:26:08.099102 sshd-session[5043]: pam_unix(sshd:session): session closed for user core Oct 27 08:26:08.111698 systemd[1]: sshd@16-143.198.224.48:22-139.178.89.65:43834.service: Deactivated successfully. Oct 27 08:26:08.115042 systemd[1]: session-17.scope: Deactivated successfully. 
Oct 27 08:26:08.118330 systemd-logind[1572]: Session 17 logged out. Waiting for processes to exit. Oct 27 08:26:08.122209 systemd[1]: Started sshd@17-143.198.224.48:22-139.178.89.65:43848.service - OpenSSH per-connection server daemon (139.178.89.65:43848). Oct 27 08:26:08.127949 systemd-logind[1572]: Removed session 17. Oct 27 08:26:08.220812 sshd[5056]: Accepted publickey for core from 139.178.89.65 port 43848 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:26:08.222644 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:26:08.229461 systemd-logind[1572]: New session 18 of user core. Oct 27 08:26:08.236980 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 27 08:26:08.463544 containerd[1614]: time="2025-10-27T08:26:08.463416290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:26:08.878768 containerd[1614]: time="2025-10-27T08:26:08.878500959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:26:08.879718 containerd[1614]: time="2025-10-27T08:26:08.879641340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:26:08.880169 containerd[1614]: time="2025-10-27T08:26:08.879833589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:26:08.880821 kubelet[2775]: E1027 08:26:08.880704 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:26:08.884265 kubelet[2775]: E1027 08:26:08.882162 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:26:08.884265 kubelet[2775]: E1027 08:26:08.882738 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb4zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9d8b896f8-4w9rr_calico-apiserver(0ff60fd1-1efa-4af7-a609-59d81e9c7a0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:26:08.884265 kubelet[2775]: E1027 08:26:08.884185 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" podUID="0ff60fd1-1efa-4af7-a609-59d81e9c7a0f" Oct 27 08:26:08.884870 containerd[1614]: time="2025-10-27T08:26:08.883778680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:26:08.936314 sshd[5059]: Connection closed by 139.178.89.65 
port 43848 Oct 27 08:26:08.936894 sshd-session[5056]: pam_unix(sshd:session): session closed for user core Oct 27 08:26:08.952740 systemd[1]: sshd@17-143.198.224.48:22-139.178.89.65:43848.service: Deactivated successfully. Oct 27 08:26:08.957133 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 08:26:08.959544 systemd-logind[1572]: Session 18 logged out. Waiting for processes to exit. Oct 27 08:26:08.965827 systemd[1]: Started sshd@18-143.198.224.48:22-139.178.89.65:43862.service - OpenSSH per-connection server daemon (139.178.89.65:43862). Oct 27 08:26:08.969270 systemd-logind[1572]: Removed session 18. Oct 27 08:26:09.079004 sshd[5074]: Accepted publickey for core from 139.178.89.65 port 43862 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:26:09.080949 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:26:09.087674 systemd-logind[1572]: New session 19 of user core. Oct 27 08:26:09.097867 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 27 08:26:09.226640 containerd[1614]: time="2025-10-27T08:26:09.226565217Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:26:09.227570 containerd[1614]: time="2025-10-27T08:26:09.227228887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:26:09.227570 containerd[1614]: time="2025-10-27T08:26:09.227469299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:26:09.228401 kubelet[2775]: E1027 08:26:09.227805 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:26:09.228401 kubelet[2775]: E1027 08:26:09.227857 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:26:09.228401 kubelet[2775]: E1027 08:26:09.227997 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bm584,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jn8zm_calico-system(9d02f52d-c9e1-4d0e-b6df-042109e24c03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:26:09.233041 containerd[1614]: time="2025-10-27T08:26:09.232154660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:26:09.459856 kubelet[2775]: E1027 08:26:09.459758 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:26:09.542679 sshd[5079]: Connection closed by 139.178.89.65 port 43862 Oct 27 08:26:09.543341 sshd-session[5074]: pam_unix(sshd:session): session closed for user core Oct 27 08:26:09.560959 systemd[1]: sshd@18-143.198.224.48:22-139.178.89.65:43862.service: Deactivated successfully. Oct 27 08:26:09.565670 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 08:26:09.568045 systemd-logind[1572]: Session 19 logged out. Waiting for processes to exit. Oct 27 08:26:09.584200 systemd[1]: Started sshd@19-143.198.224.48:22-139.178.89.65:43876.service - OpenSSH per-connection server daemon (139.178.89.65:43876). Oct 27 08:26:09.587498 systemd-logind[1572]: Removed session 19. Oct 27 08:26:09.693357 sshd[5089]: Accepted publickey for core from 139.178.89.65 port 43876 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:26:09.697489 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:26:09.707410 systemd-logind[1572]: New session 20 of user core. Oct 27 08:26:09.719864 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 27 08:26:09.758980 containerd[1614]: time="2025-10-27T08:26:09.758926078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:26:09.760917 containerd[1614]: time="2025-10-27T08:26:09.760423002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:26:09.760960 kubelet[2775]: E1027 08:26:09.760813 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:26:09.760960 kubelet[2775]: E1027 08:26:09.760875 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:26:09.761130 kubelet[2775]: E1027 08:26:09.761051 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bm584,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jn8zm_calico-system(9d02f52d-c9e1-4d0e-b6df-042109e24c03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:26:09.761342 containerd[1614]: time="2025-10-27T08:26:09.760483007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:26:09.763785 kubelet[2775]: E1027 08:26:09.763701 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:26:09.910350 sshd[5092]: Connection closed by 139.178.89.65 port 43876 Oct 27 08:26:09.911737 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Oct 27 08:26:09.918827 systemd[1]: sshd@19-143.198.224.48:22-139.178.89.65:43876.service: Deactivated successfully. Oct 27 08:26:09.923227 systemd[1]: session-20.scope: Deactivated successfully. Oct 27 08:26:09.929104 systemd-logind[1572]: Session 20 logged out. Waiting for processes to exit. Oct 27 08:26:09.930690 systemd-logind[1572]: Removed session 20. 
Oct 27 08:26:10.459590 kubelet[2775]: E1027 08:26:10.459354 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 27 08:26:10.475301 containerd[1614]: time="2025-10-27T08:26:10.475120860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:26:11.063186 containerd[1614]: time="2025-10-27T08:26:11.063122859Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:26:11.064059 containerd[1614]: time="2025-10-27T08:26:11.064009379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:26:11.064156 containerd[1614]: time="2025-10-27T08:26:11.064117990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:26:11.064784 kubelet[2775]: E1027 08:26:11.064589 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:26:11.064784 kubelet[2775]: E1027 08:26:11.064776 2775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:26:11.065572 kubelet[2775]: E1027 08:26:11.065397 2775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbz25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-664fbc97f9-cb2jp_calico-system(df40acd6-6199-4dea-8b25-0040183349ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:26:11.067532 kubelet[2775]: E1027 08:26:11.066977 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" podUID="df40acd6-6199-4dea-8b25-0040183349ca" Oct 27 08:26:14.465434 kubelet[2775]: E1027 08:26:14.465386 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7d4c49f6-pwfq6" podUID="d9ea689c-020a-4c9b-8879-a266b7c116e3" Oct 27 08:26:14.932351 systemd[1]: Started sshd@20-143.198.224.48:22-139.178.89.65:43878.service - OpenSSH per-connection server daemon (139.178.89.65:43878). Oct 27 08:26:15.081606 sshd[5105]: Accepted publickey for core from 139.178.89.65 port 43878 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:26:15.084255 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:26:15.095596 systemd-logind[1572]: New session 21 of user core. Oct 27 08:26:15.097785 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 27 08:26:15.303744 sshd[5108]: Connection closed by 139.178.89.65 port 43878 Oct 27 08:26:15.304756 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Oct 27 08:26:15.317016 systemd[1]: sshd@20-143.198.224.48:22-139.178.89.65:43878.service: Deactivated successfully. Oct 27 08:26:15.322981 systemd[1]: session-21.scope: Deactivated successfully. Oct 27 08:26:15.324253 systemd-logind[1572]: Session 21 logged out. Waiting for processes to exit. Oct 27 08:26:15.326489 systemd-logind[1572]: Removed session 21. 
Oct 27 08:26:15.461077 kubelet[2775]: E1027 08:26:15.460990 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" podUID="606a9dd0-b52b-437b-ae5f-d5d2e13b6421" Oct 27 08:26:18.462453 kubelet[2775]: E1027 08:26:18.461715 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x85mb" podUID="3d26e3cf-e6a3-4346-be05-bc637815bb23" Oct 27 08:26:20.319649 systemd[1]: Started sshd@21-143.198.224.48:22-139.178.89.65:53806.service - OpenSSH per-connection server daemon (139.178.89.65:53806). 
Oct 27 08:26:20.466866 kubelet[2775]: E1027 08:26:20.466458 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-4w9rr" podUID="0ff60fd1-1efa-4af7-a609-59d81e9c7a0f" Oct 27 08:26:20.500375 sshd[5122]: Accepted publickey for core from 139.178.89.65 port 53806 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:26:20.503801 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:26:20.511173 systemd-logind[1572]: New session 22 of user core. Oct 27 08:26:20.518885 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 27 08:26:20.874887 sshd[5125]: Connection closed by 139.178.89.65 port 53806 Oct 27 08:26:20.876711 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Oct 27 08:26:20.891075 systemd-logind[1572]: Session 22 logged out. Waiting for processes to exit. Oct 27 08:26:20.893065 systemd[1]: sshd@21-143.198.224.48:22-139.178.89.65:53806.service: Deactivated successfully. Oct 27 08:26:20.898343 systemd[1]: session-22.scope: Deactivated successfully. Oct 27 08:26:20.903956 systemd-logind[1572]: Removed session 22. 
Oct 27 08:26:20.983300 containerd[1614]: time="2025-10-27T08:26:20.983190525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"797ff5e8f73494a4378f808e979b3d4ba294ad42c2660365c173a58a96df4166\" id:\"83507f7eff0b1dfc8de5e80192151dc4a7014a505ed93ac02b8a3b9f7cc95b22\" pid:5146 exited_at:{seconds:1761553580 nanos:982731772}" Oct 27 08:26:22.463657 kubelet[2775]: E1027 08:26:22.463600 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jn8zm" podUID="9d02f52d-c9e1-4d0e-b6df-042109e24c03" Oct 27 08:26:25.892701 systemd[1]: Started sshd@22-143.198.224.48:22-139.178.89.65:53808.service - OpenSSH per-connection server daemon (139.178.89.65:53808). Oct 27 08:26:25.976534 sshd[5165]: Accepted publickey for core from 139.178.89.65 port 53808 ssh2: RSA SHA256:rxa87oi8ZZqMD8URaMdjWEem69/UDQnMWUTPMulZcos Oct 27 08:26:25.981290 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:26:25.997623 systemd-logind[1572]: New session 23 of user core. 
Oct 27 08:26:26.000366 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 27 08:26:26.179242 sshd[5168]: Connection closed by 139.178.89.65 port 53808 Oct 27 08:26:26.180181 sshd-session[5165]: pam_unix(sshd:session): session closed for user core Oct 27 08:26:26.187130 systemd-logind[1572]: Session 23 logged out. Waiting for processes to exit. Oct 27 08:26:26.187871 systemd[1]: sshd@22-143.198.224.48:22-139.178.89.65:53808.service: Deactivated successfully. Oct 27 08:26:26.192281 systemd[1]: session-23.scope: Deactivated successfully. Oct 27 08:26:26.196221 systemd-logind[1572]: Removed session 23. Oct 27 08:26:26.462271 kubelet[2775]: E1027 08:26:26.462208 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664fbc97f9-cb2jp" podUID="df40acd6-6199-4dea-8b25-0040183349ca" Oct 27 08:26:27.461220 kubelet[2775]: E1027 08:26:27.460700 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d8b896f8-z6l9g" podUID="606a9dd0-b52b-437b-ae5f-d5d2e13b6421" Oct 27 08:26:27.462817 kubelet[2775]: E1027 08:26:27.462664 
2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7d4c49f6-pwfq6" podUID="d9ea689c-020a-4c9b-8879-a266b7c116e3"