Oct 28 05:17:27.089653 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 28 03:19:40 -00 2025 Oct 28 05:17:27.089716 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=449db75fd0bf4f00a7b0da93783dc37f82f4a66df937e11c006397de0369495c Oct 28 05:17:27.089739 kernel: BIOS-provided physical RAM map: Oct 28 05:17:27.089751 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 28 05:17:27.089760 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 28 05:17:27.089770 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 28 05:17:27.089782 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Oct 28 05:17:27.089799 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Oct 28 05:17:27.089810 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 28 05:17:27.089821 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 28 05:17:27.089837 kernel: NX (Execute Disable) protection: active Oct 28 05:17:27.089847 kernel: APIC: Static calls initialized Oct 28 05:17:27.089858 kernel: SMBIOS 2.8 present. Oct 28 05:17:27.089870 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Oct 28 05:17:27.089883 kernel: DMI: Memory slots populated: 1/1 Oct 28 05:17:27.089900 kernel: Hypervisor detected: KVM Oct 28 05:17:27.089916 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Oct 28 05:17:27.089929 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 28 05:17:27.089941 kernel: kvm-clock: using sched offset of 3943730779 cycles Oct 28 05:17:27.089956 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 28 05:17:27.089968 kernel: tsc: Detected 2494.140 MHz processor Oct 28 05:17:27.089981 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 28 05:17:27.089995 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 28 05:17:27.090012 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Oct 28 05:17:27.090756 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 28 05:17:27.090782 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 28 05:17:27.090795 kernel: ACPI: Early table checksum verification disabled Oct 28 05:17:27.090809 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Oct 28 05:17:27.090822 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 05:17:27.090835 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 05:17:27.090856 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 05:17:27.090869 kernel: ACPI: FACS 0x000000007FFE0000 000040 Oct 28 05:17:27.090882 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 05:17:27.090894 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 05:17:27.090908 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 
05:17:27.090921 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 05:17:27.090935 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Oct 28 05:17:27.090952 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Oct 28 05:17:27.090965 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Oct 28 05:17:27.090978 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Oct 28 05:17:27.090999 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Oct 28 05:17:27.091010 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Oct 28 05:17:27.091071 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Oct 28 05:17:27.091091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 28 05:17:27.091106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 28 05:17:27.091119 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Oct 28 05:17:27.094121 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Oct 28 05:17:27.094153 kernel: Zone ranges: Oct 28 05:17:27.094168 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 28 05:17:27.094193 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Oct 28 05:17:27.094206 kernel: Normal empty Oct 28 05:17:27.094221 kernel: Device empty Oct 28 05:17:27.094236 kernel: Movable zone start for each node Oct 28 05:17:27.094248 kernel: Early memory node ranges Oct 28 05:17:27.094260 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 28 05:17:27.094274 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Oct 28 05:17:27.094293 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Oct 28 05:17:27.094308 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 28 05:17:27.094320 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 28 05:17:27.094333 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Oct 28 05:17:27.094348 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 28 05:17:27.094369 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 28 05:17:27.094382 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 28 05:17:27.094398 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 28 05:17:27.094417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 28 05:17:27.094431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 28 05:17:27.094448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 28 05:17:27.094461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 28 05:17:27.094476 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 28 05:17:27.094489 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 28 05:17:27.094504 kernel: TSC deadline timer available Oct 28 05:17:27.094521 kernel: CPU topo: Max. logical packages: 1 Oct 28 05:17:27.094535 kernel: CPU topo: Max. logical dies: 1 Oct 28 05:17:27.094548 kernel: CPU topo: Max. dies per package: 1 Oct 28 05:17:27.094561 kernel: CPU topo: Max. threads per core: 1 Oct 28 05:17:27.094575 kernel: CPU topo: Num. cores per package: 2 Oct 28 05:17:27.094587 kernel: CPU topo: Num. 
threads per package: 2 Oct 28 05:17:27.094600 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Oct 28 05:17:27.094614 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 28 05:17:27.094633 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Oct 28 05:17:27.094646 kernel: Booting paravirtualized kernel on KVM Oct 28 05:17:27.094660 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 28 05:17:27.094674 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 28 05:17:27.094689 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Oct 28 05:17:27.094702 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Oct 28 05:17:27.094716 kernel: pcpu-alloc: [0] 0 1 Oct 28 05:17:27.094734 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 28 05:17:27.094751 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=449db75fd0bf4f00a7b0da93783dc37f82f4a66df937e11c006397de0369495c Oct 28 05:17:27.094764 kernel: random: crng init done Oct 28 05:17:27.094777 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 28 05:17:27.094791 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 28 05:17:27.094804 kernel: Fallback order for Node 0: 0 Oct 28 05:17:27.094823 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Oct 28 05:17:27.094835 kernel: Policy zone: DMA32 Oct 28 05:17:27.094847 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 28 05:17:27.094861 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 28 05:17:27.094875 kernel: Kernel/User page tables isolation: enabled Oct 28 05:17:27.094889 kernel: ftrace: allocating 40092 entries in 157 pages Oct 28 05:17:27.094902 kernel: ftrace: allocated 157 pages with 5 groups Oct 28 05:17:27.094915 kernel: Dynamic Preempt: voluntary Oct 28 05:17:27.094933 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 28 05:17:27.094949 kernel: rcu: RCU event tracing is enabled. Oct 28 05:17:27.094962 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 28 05:17:27.094974 kernel: Trampoline variant of Tasks RCU enabled. Oct 28 05:17:27.094987 kernel: Rude variant of Tasks RCU enabled. Oct 28 05:17:27.095002 kernel: Tracing variant of Tasks RCU enabled. Oct 28 05:17:27.095015 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 28 05:17:27.095050 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 28 05:17:27.095064 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 28 05:17:27.095082 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 28 05:17:27.095097 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 28 05:17:27.095110 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 28 05:17:27.095124 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 28 05:17:27.095138 kernel: Console: colour VGA+ 80x25 Oct 28 05:17:27.095155 kernel: printk: legacy console [tty0] enabled Oct 28 05:17:27.095168 kernel: printk: legacy console [ttyS0] enabled Oct 28 05:17:27.095181 kernel: ACPI: Core revision 20240827 Oct 28 05:17:27.095196 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 28 05:17:27.095224 kernel: APIC: Switch to symmetric I/O mode setup Oct 28 05:17:27.095242 kernel: x2apic enabled Oct 28 05:17:27.095256 kernel: APIC: Switched APIC routing to: physical x2apic Oct 28 05:17:27.095271 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 28 05:17:27.095285 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Oct 28 05:17:27.095303 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Oct 28 05:17:27.095320 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 28 05:17:27.095336 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 28 05:17:27.095350 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 28 05:17:27.095369 kernel: Spectre V2 : Mitigation: Retpolines Oct 28 05:17:27.095382 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 28 05:17:27.095396 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Oct 28 05:17:27.095411 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 28 05:17:27.095426 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 28 05:17:27.095439 kernel: MDS: Mitigation: Clear CPU buffers Oct 28 05:17:27.095454 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 28 05:17:27.095472 kernel: active return thunk: its_return_thunk Oct 28 05:17:27.095487 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 28 05:17:27.095501 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 28 05:17:27.095514 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 28 05:17:27.095528 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 28 05:17:27.095542 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 28 05:17:27.095557 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 28 05:17:27.095575 kernel: Freeing SMP alternatives memory: 32K Oct 28 05:17:27.095590 kernel: pid_max: default: 32768 minimum: 301 Oct 28 05:17:27.095604 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 28 05:17:27.095620 kernel: landlock: Up and running. Oct 28 05:17:27.095633 kernel: SELinux: Initializing. Oct 28 05:17:27.095646 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 28 05:17:27.095661 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 28 05:17:27.095675 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Oct 28 05:17:27.095694 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Oct 28 05:17:27.095707 kernel: signal: max sigframe size: 1776 Oct 28 05:17:27.095722 kernel: rcu: Hierarchical SRCU implementation. Oct 28 05:17:27.095737 kernel: rcu: Max phase no-delay instances is 400. 
Oct 28 05:17:27.095752 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 28 05:17:27.095766 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 28 05:17:27.095778 kernel: smp: Bringing up secondary CPUs ... Oct 28 05:17:27.095803 kernel: smpboot: x86: Booting SMP configuration: Oct 28 05:17:27.095817 kernel: .... node #0, CPUs: #1 Oct 28 05:17:27.095831 kernel: smp: Brought up 1 node, 2 CPUs Oct 28 05:17:27.095844 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Oct 28 05:17:27.095861 kernel: Memory: 1985340K/2096612K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15960K init, 2084K bss, 106708K reserved, 0K cma-reserved) Oct 28 05:17:27.095875 kernel: devtmpfs: initialized Oct 28 05:17:27.095890 kernel: x86/mm: Memory block size: 128MB Oct 28 05:17:27.095908 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 28 05:17:27.095922 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 28 05:17:27.095938 kernel: pinctrl core: initialized pinctrl subsystem Oct 28 05:17:27.095965 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 28 05:17:27.095980 kernel: audit: initializing netlink subsys (disabled) Oct 28 05:17:27.095994 kernel: audit: type=2000 audit(1761628644.826:1): state=initialized audit_enabled=0 res=1 Oct 28 05:17:27.096008 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 28 05:17:27.096765 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 28 05:17:27.096793 kernel: cpuidle: using governor menu Oct 28 05:17:27.096810 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 28 05:17:27.096825 kernel: dca service started, version 1.12.1 Oct 28 05:17:27.096839 kernel: PCI: Using configuration type 1 for base access Oct 28 05:17:27.096855 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 28 05:17:27.096887 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 28 05:17:27.096910 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 28 05:17:27.096925 kernel: ACPI: Added _OSI(Module Device) Oct 28 05:17:27.096939 kernel: ACPI: Added _OSI(Processor Device) Oct 28 05:17:27.096954 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 28 05:17:27.096968 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 28 05:17:27.096983 kernel: ACPI: Interpreter enabled Oct 28 05:17:27.096996 kernel: ACPI: PM: (supports S0 S5) Oct 28 05:17:27.097016 kernel: ACPI: Using IOAPIC for interrupt routing Oct 28 05:17:27.097030 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 28 05:17:27.097064 kernel: PCI: Using E820 reservations for host bridge windows Oct 28 05:17:27.097079 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 28 05:17:27.097093 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 28 05:17:27.097482 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 28 05:17:27.097710 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Oct 28 05:17:27.097913 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Oct 28 05:17:27.097934 kernel: acpiphp: Slot [3] registered Oct 28 05:17:27.097950 kernel: acpiphp: Slot [4] registered Oct 28 05:17:27.097965 kernel: acpiphp: Slot [5] registered Oct 28 05:17:27.097981 kernel: acpiphp: Slot [6] registered Oct 28 05:17:27.097994 kernel: acpiphp: Slot [7] registered Oct 28 05:17:27.098017 kernel: acpiphp: Slot [8] registered Oct 28 05:17:27.098048 kernel: acpiphp: Slot [9] registered Oct 28 05:17:27.098061 kernel: acpiphp: Slot [10] registered Oct 28 05:17:27.098075 kernel: acpiphp: Slot [11] registered Oct 28 05:17:27.098090 kernel: acpiphp: Slot [12] registered Oct 28 05:17:27.098104 kernel: acpiphp: Slot [13] registered Oct 28 05:17:27.098118 kernel: acpiphp: Slot [14] registered Oct 28 05:17:27.098136 kernel: acpiphp: Slot [15] registered Oct 28 05:17:27.098151 kernel: acpiphp: Slot [16] registered Oct 28 05:17:27.098165 kernel: acpiphp: Slot [17] registered Oct 28 05:17:27.098180 kernel: acpiphp: Slot [18] registered Oct 28 05:17:27.098193 kernel: acpiphp: Slot [19] registered Oct 28 05:17:27.098206 kernel: acpiphp: Slot [20] registered Oct 28 05:17:27.098221 kernel: acpiphp: Slot [21] registered Oct 28 05:17:27.098236 kernel: acpiphp: Slot [22] registered Oct 28 05:17:27.098255 kernel: acpiphp: Slot [23] registered Oct 28 05:17:27.098269 kernel: acpiphp: Slot [24] registered Oct 28 05:17:27.098282 kernel: acpiphp: Slot [25] registered Oct 28 05:17:27.098296 kernel: acpiphp: Slot [26] registered Oct 28 05:17:27.098310 kernel: acpiphp: Slot [27] registered Oct 28 05:17:27.098326 kernel: acpiphp: Slot [28] registered Oct 28 05:17:27.098340 kernel: acpiphp: Slot [29] registered Oct 28 05:17:27.098358 kernel: acpiphp: Slot [30] registered Oct 28 05:17:27.098373 kernel: acpiphp: Slot [31] registered Oct 28 05:17:27.098387 kernel: PCI host bridge to bus 0000:00 Oct 28 05:17:27.098634 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 28 05:17:27.098815 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 28 05:17:27.098990 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 28 05:17:27.103012 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Oct 28 05:17:27.103300 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Oct 28 05:17:27.103480 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 28 05:17:27.103718 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Oct 28 05:17:27.103929 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Oct 28 05:17:27.104195 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Oct 28 05:17:27.104419 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Oct 28 05:17:27.104656 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Oct 28 05:17:27.104848 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Oct 28 05:17:27.105601 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Oct 28 05:17:27.107401 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Oct 28 05:17:27.108528 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Oct 28 05:17:27.112412 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Oct 28 05:17:27.112698 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Oct 28 05:17:27.112909 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 28 05:17:27.113142 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 28 05:17:27.113354 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Oct 28 05:17:27.113562 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Oct 28 05:17:27.113751 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Oct 28 05:17:27.113941 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Oct 28 05:17:27.114153 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Oct 28 05:17:27.114353 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 28 05:17:27.114577 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 28 05:17:27.114770 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Oct 28 05:17:27.114958 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Oct 28 05:17:27.117335 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Oct 28 05:17:27.117598 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 28 05:17:27.117816 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Oct 28 05:17:27.118024 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Oct 28 05:17:27.118256 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Oct 28 05:17:27.118464 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Oct 28 05:17:27.118661 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Oct 28 05:17:27.118855 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Oct 28 05:17:27.121190 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Oct 28 05:17:27.121516 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 28 05:17:27.121726 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Oct 28 05:17:27.121951 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Oct 28 05:17:27.122176 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Oct 28 05:17:27.122385 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 28 05:17:27.122592 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Oct 28 05:17:27.122796 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Oct 28 05:17:27.122982 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Oct 28 05:17:27.123199 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Oct 28 05:17:27.123380 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Oct 28 05:17:27.123570 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Oct 28 05:17:27.123589 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 28 05:17:27.123604 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 28 05:17:27.123619 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 28 05:17:27.123634 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 28 05:17:27.123650 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 28 05:17:27.123665 kernel: iommu: Default domain type: Translated Oct 28 05:17:27.123684 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 28 05:17:27.123699 kernel: PCI: Using ACPI for IRQ routing Oct 28 05:17:27.123714 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 28 05:17:27.123728 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 28 05:17:27.123742 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Oct 28 05:17:27.123931 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 28 05:17:27.124190 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 28 05:17:27.124377 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 28 05:17:27.124396 kernel: vgaarb: loaded Oct 28 05:17:27.124411 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 28 05:17:27.124426 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 28 05:17:27.124441 kernel: clocksource: Switched to clocksource kvm-clock Oct 28 05:17:27.124456 kernel: VFS: Disk quotas dquot_6.6.0 Oct 28 05:17:27.124472 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 28 05:17:27.124491 kernel: pnp: PnP ACPI init Oct 28 05:17:27.124506 kernel: pnp: PnP ACPI: found 4 devices Oct 28 05:17:27.124522 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 28 05:17:27.124536 kernel: NET: Registered PF_INET protocol family Oct 28 05:17:27.124550 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 28 05:17:27.124566 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 28 05:17:27.124580 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 28 05:17:27.124599 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 28 05:17:27.124614 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 28 05:17:27.124629 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 28 05:17:27.124644 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 28 05:17:27.124660 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 28 05:17:27.124674 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 28 05:17:27.124689 kernel: NET: Registered PF_XDP protocol family Oct 28 05:17:27.124872 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 28 05:17:27.125055 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Oct 28 05:17:27.125221 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 28 05:17:27.125383 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 28 05:17:27.125545 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 28 05:17:27.125735 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 28 05:17:27.125925 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 28 05:17:27.125953 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 28 05:17:27.126178 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 25771 usecs Oct 28 05:17:27.126198 kernel: PCI: CLS 0 bytes, default 64 Oct 28 05:17:27.126215 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 28 05:17:27.126230 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Oct 28 05:17:27.126246 kernel: Initialise system trusted keyrings Oct 28 05:17:27.126267 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 28 05:17:27.126282 kernel: Key type asymmetric registered Oct 28 05:17:27.126297 kernel: Asymmetric key parser 'x509' registered Oct 28 05:17:27.126312 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 28 05:17:27.126327 kernel: io scheduler mq-deadline registered Oct 28 05:17:27.126342 kernel: io scheduler kyber registered Oct 28 05:17:27.126357 kernel: io scheduler bfq registered Oct 28 05:17:27.126377 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 28 05:17:27.126393 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Oct 28 05:17:27.126409 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 28 05:17:27.126424 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 28 05:17:27.126439 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 28 05:17:27.126454 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 28 05:17:27.126469 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 28 05:17:27.126484 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 28 05:17:27.126502 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 28 05:17:27.126709 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 28 05:17:27.126730 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 28 05:17:27.126897 kernel: rtc_cmos 00:03: registered as rtc0 Oct 28 05:17:27.127086 kernel: rtc_cmos 00:03: setting system clock to 2025-10-28T05:17:25 UTC (1761628645) Oct 28 05:17:27.127260 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Oct 28 05:17:27.127283 kernel: intel_pstate: CPU model not supported Oct 28 05:17:27.127298 kernel: NET: Registered PF_INET6 protocol family Oct 28 05:17:27.127313 kernel: Segment Routing with IPv6 Oct 28 05:17:27.127328 kernel: In-situ OAM (IOAM) with IPv6 Oct 28 05:17:27.127342 kernel: NET: Registered PF_PACKET protocol family Oct 28 05:17:27.127357 kernel: Key type dns_resolver registered Oct 28 05:17:27.127371 kernel: IPI shorthand broadcast: enabled Oct 28 05:17:27.127391 kernel: sched_clock: Marking stable (1219002200, 149164418)->(1401085723, -32919105) Oct 28 05:17:27.127405 kernel: registered taskstats version 1 Oct 28 05:17:27.127420 kernel: Loading compiled-in X.509 certificates Oct 28 05:17:27.127435 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: a9d98af1927e389c63ed03bf44a9f2758bf88a8e' Oct 
28 05:17:27.127449 kernel: Demotion targets for Node 0: null Oct 28 05:17:27.127465 kernel: Key type .fscrypt registered Oct 28 05:17:27.127480 kernel: Key type fscrypt-provisioning registered Oct 28 05:17:27.127523 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 28 05:17:27.127542 kernel: ima: Allocated hash algorithm: sha1 Oct 28 05:17:27.127557 kernel: ima: No architecture policies found Oct 28 05:17:27.127572 kernel: clk: Disabling unused clocks Oct 28 05:17:27.127587 kernel: Freeing unused kernel image (initmem) memory: 15960K Oct 28 05:17:27.127603 kernel: Write protecting the kernel read-only data: 45056k Oct 28 05:17:27.127618 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Oct 28 05:17:27.127638 kernel: Run /init as init process Oct 28 05:17:27.127653 kernel: with arguments: Oct 28 05:17:27.127669 kernel: /init Oct 28 05:17:27.127685 kernel: with environment: Oct 28 05:17:27.127700 kernel: HOME=/ Oct 28 05:17:27.127714 kernel: TERM=linux Oct 28 05:17:27.127730 kernel: SCSI subsystem initialized Oct 28 05:17:27.127745 kernel: libata version 3.00 loaded. Oct 28 05:17:27.127945 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 28 05:17:27.128228 kernel: scsi host0: ata_piix Oct 28 05:17:27.128433 kernel: scsi host1: ata_piix Oct 28 05:17:27.128454 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Oct 28 05:17:27.128470 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Oct 28 05:17:27.128492 kernel: ACPI: bus type USB registered Oct 28 05:17:27.128508 kernel: usbcore: registered new interface driver usbfs Oct 28 05:17:27.128525 kernel: usbcore: registered new interface driver hub Oct 28 05:17:27.128541 kernel: usbcore: registered new device driver usb Oct 28 05:17:27.128726 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Oct 28 05:17:27.128913 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Oct 28 05:17:27.129657 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Oct 28 05:17:27.129878 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Oct 28 05:17:27.130830 kernel: hub 1-0:1.0: USB hub found Oct 28 05:17:27.131097 kernel: hub 1-0:1.0: 2 ports detected Oct 28 05:17:27.131334 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Oct 28 05:17:27.131520 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 28 05:17:27.131543 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 28 05:17:27.131561 kernel: GPT:16515071 != 125829119 Oct 28 05:17:27.131578 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 28 05:17:27.131600 kernel: GPT:16515071 != 125829119 Oct 28 05:17:27.131620 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 28 05:17:27.131637 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 28 05:17:27.131833 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Oct 28 05:17:27.132132 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Oct 28 05:17:27.132337 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Oct 28 05:17:27.132546 kernel: scsi host2: Virtio SCSI HBA Oct 28 05:17:27.132577 kernel: Invalid ELF header magic: != \u007fELF Oct 28 05:17:27.132597 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 28 05:17:27.132613 kernel: device-mapper: uevent: version 1.0.3 Oct 28 05:17:27.132631 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 28 05:17:27.132648 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 28 05:17:27.132665 kernel: Invalid ELF header magic: != \u007fELF Oct 28 05:17:27.132681 kernel: Invalid ELF header magic: != \u007fELF Oct 28 05:17:27.132701 kernel: raid6: avx2x4 gen() 13747 MB/s Oct 28 05:17:27.132718 kernel: raid6: avx2x2 gen() 15534 MB/s Oct 28 05:17:27.132735 kernel: raid6: avx2x1 gen() 11567 MB/s Oct 28 05:17:27.132751 kernel: raid6: using algorithm avx2x2 gen() 15534 MB/s Oct 28 05:17:27.132768 kernel: raid6: .... xor() 14325 MB/s, rmw enabled Oct 28 05:17:27.132785 kernel: raid6: using avx2x2 recovery algorithm Oct 28 05:17:27.132802 kernel: Invalid ELF header magic: != \u007fELF Oct 28 05:17:27.132822 kernel: Invalid ELF header magic: != \u007fELF Oct 28 05:17:27.132838 kernel: Invalid ELF header magic: != \u007fELF Oct 28 05:17:27.132855 kernel: xor: automatically using best checksumming function avx Oct 28 05:17:27.132872 kernel: Invalid ELF header magic: != \u007fELF Oct 28 05:17:27.132887 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 28 05:17:27.132905 kernel: BTRFS: device fsid 98ad3ab2-0171-42ae-a5fc-7be2369f5a89 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (156) Oct 28 05:17:27.132922 kernel: BTRFS info (device dm-0): first mount of filesystem 98ad3ab2-0171-42ae-a5fc-7be2369f5a89 Oct 28 05:17:27.132938 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 28 05:17:27.132960 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 28 05:17:27.132992 kernel: BTRFS info (device dm-0): enabling free space tree Oct 28 05:17:27.133008 kernel: Invalid ELF header magic: != \u007fELF Oct 28 05:17:27.133024 kernel: loop: module loaded Oct 28 05:17:27.133059 kernel: loop0: detected capacity change from 0 to 100136 Oct 28 05:17:27.133076 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 28 05:17:27.133096 systemd[1]: Successfully made /usr/ read-only. Oct 28 05:17:27.133124 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 28 05:17:27.133142 systemd[1]: Detected virtualization kvm. Oct 28 05:17:27.133160 systemd[1]: Detected architecture x86-64. Oct 28 05:17:27.133177 systemd[1]: Running in initrd. Oct 28 05:17:27.133194 systemd[1]: No hostname configured, using default hostname. Oct 28 05:17:27.133216 systemd[1]: Hostname set to . Oct 28 05:17:27.133233 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 28 05:17:27.133250 systemd[1]: Queued start job for default target initrd.target. Oct 28 05:17:27.133267 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 28 05:17:27.133282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 05:17:27.133297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 05:17:27.133313 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 28 05:17:27.133333 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 28 05:17:27.133349 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 28 05:17:27.133366 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 28 05:17:27.133381 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 05:17:27.133396 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 28 05:17:27.133412 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 28 05:17:27.133431 systemd[1]: Reached target paths.target - Path Units. Oct 28 05:17:27.133447 systemd[1]: Reached target slices.target - Slice Units. Oct 28 05:17:27.133463 systemd[1]: Reached target swap.target - Swaps. Oct 28 05:17:27.133700 systemd[1]: Reached target timers.target - Timer Units. Oct 28 05:17:27.133718 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 28 05:17:27.133734 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 28 05:17:27.133750 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 28 05:17:27.133771 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 28 05:17:27.133787 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 28 05:17:27.133803 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 28 05:17:27.133819 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 05:17:27.133834 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 05:17:27.133850 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 28 05:17:27.133870 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 28 05:17:27.133886 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 28 05:17:27.133902 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 28 05:17:27.133920 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 28 05:17:27.135327 systemd[1]: Starting systemd-fsck-usr.service... Oct 28 05:17:27.135349 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 28 05:17:27.135366 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 28 05:17:27.135389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 05:17:27.135406 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 28 05:17:27.135511 systemd-journald[293]: Collecting audit messages is disabled. Oct 28 05:17:27.135554 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 05:17:27.135571 systemd[1]: Finished systemd-fsck-usr.service. Oct 28 05:17:27.135587 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 28 05:17:27.135606 systemd-journald[293]: Journal started Oct 28 05:17:27.135642 systemd-journald[293]: Runtime Journal (/run/log/journal/54c34fb158524103a082efa0e99d1082) is 4.9M, max 39.1M, 34.2M free. Oct 28 05:17:27.138073 systemd[1]: Started systemd-journald.service - Journal Service. 
Oct 28 05:17:27.156380 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 28 05:17:27.163071 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 05:17:27.171578 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 28 05:17:27.173290 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 28 05:17:27.242593 kernel: Bridge firewalling registered Oct 28 05:17:27.182123 systemd-modules-load[294]: Inserted module 'br_netfilter' Oct 28 05:17:27.186903 systemd-tmpfiles[307]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 28 05:17:27.247332 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 28 05:17:27.249094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:17:27.250281 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 05:17:27.255480 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 28 05:17:27.259365 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 05:17:27.275184 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 05:17:27.290302 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 05:17:27.293771 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 05:17:27.303540 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 28 05:17:27.316713 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 28 05:17:27.345645 dracut-cmdline[333]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=449db75fd0bf4f00a7b0da93783dc37f82f4a66df937e11c006397de0369495c Oct 28 05:17:27.373873 systemd-resolved[327]: Positive Trust Anchors: Oct 28 05:17:27.373887 systemd-resolved[327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 05:17:27.373891 systemd-resolved[327]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 05:17:27.373929 systemd-resolved[327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 05:17:27.403463 systemd-resolved[327]: Defaulting to hostname 'linux'. Oct 28 05:17:27.405526 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 05:17:27.407712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 28 05:17:27.490065 kernel: Loading iSCSI transport class v2.0-870. Oct 28 05:17:27.507059 kernel: iscsi: registered transport (tcp) Oct 28 05:17:27.537066 kernel: iscsi: registered transport (qla4xxx) Oct 28 05:17:27.537144 kernel: QLogic iSCSI HBA Driver Oct 28 05:17:27.570990 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 28 05:17:27.608657 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 05:17:27.609887 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 05:17:27.671203 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 28 05:17:27.674319 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 28 05:17:27.676341 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 28 05:17:27.718717 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 28 05:17:27.722213 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 05:17:27.751385 systemd-udevd[575]: Using default interface naming scheme 'v257'. Oct 28 05:17:27.763014 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 05:17:27.767792 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 28 05:17:27.800084 dracut-pre-trigger[642]: rd.md=0: removing MD RAID activation Oct 28 05:17:27.800893 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 05:17:27.809064 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 05:17:27.845439 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 28 05:17:27.847633 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 28 05:17:27.869042 systemd-networkd[686]: lo: Link UP Oct 28 05:17:27.869726 systemd-networkd[686]: lo: Gained carrier Oct 28 05:17:27.870521 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 05:17:27.871141 systemd[1]: Reached target network.target - Network. Oct 28 05:17:27.941385 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 05:17:27.945307 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 28 05:17:28.052381 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 28 05:17:28.068176 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 28 05:17:28.092946 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 28 05:17:28.097395 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 28 05:17:28.125079 disk-uuid[741]: Primary Header is updated. Oct 28 05:17:28.125079 disk-uuid[741]: Secondary Entries is updated. Oct 28 05:17:28.125079 disk-uuid[741]: Secondary Header is updated. Oct 28 05:17:28.128535 kernel: cryptd: max_cpu_qlen set to 1000 Oct 28 05:17:28.171608 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Oct 28 05:17:28.188229 systemd-networkd[686]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Oct 28 05:17:28.188241 systemd-networkd[686]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Oct 28 05:17:28.191454 systemd-networkd[686]: eth0: Link UP Oct 28 05:17:28.191752 systemd-networkd[686]: eth0: Gained carrier Oct 28 05:17:28.191774 systemd-networkd[686]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Oct 28 05:17:28.214279 kernel: AES CTR mode by8 optimization enabled Oct 28 05:17:28.211039 systemd-networkd[686]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 05:17:28.211056 systemd-networkd[686]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 28 05:17:28.211810 systemd-networkd[686]: eth1: Link UP Oct 28 05:17:28.221543 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 05:17:28.221758 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:17:28.223135 systemd-networkd[686]: eth0: DHCPv4 address 164.92.80.11/20, gateway 164.92.80.1 acquired from 169.254.169.253 Oct 28 05:17:28.223588 systemd-networkd[686]: eth1: Gained carrier Oct 28 05:17:28.223608 systemd-networkd[686]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 05:17:28.260268 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 28 05:17:28.226544 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 05:17:28.244367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 05:17:28.262136 systemd-networkd[686]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253 Oct 28 05:17:28.416908 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:17:28.429496 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 28 05:17:28.431464 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 28 05:17:28.432287 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 05:17:28.433466 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 28 05:17:28.436495 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 28 05:17:28.472122 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 28 05:17:29.213923 disk-uuid[743]: Warning: The kernel is still using the old partition table. Oct 28 05:17:29.213923 disk-uuid[743]: The new table will be used at the next reboot or after you Oct 28 05:17:29.213923 disk-uuid[743]: run partprobe(8) or kpartx(8) Oct 28 05:17:29.213923 disk-uuid[743]: The operation has completed successfully. Oct 28 05:17:29.221759 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 28 05:17:29.221961 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 28 05:17:29.224971 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Oct 28 05:17:29.271898 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835) Oct 28 05:17:29.272177 kernel: BTRFS info (device vda6): first mount of filesystem 7acd037c-32ce-4796-90d6-101869832417 Oct 28 05:17:29.276124 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 05:17:29.281963 kernel: BTRFS info (device vda6): turning on async discard Oct 28 05:17:29.282111 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 05:17:29.293064 kernel: BTRFS info (device vda6): last unmount of filesystem 7acd037c-32ce-4796-90d6-101869832417 Oct 28 05:17:29.294968 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 28 05:17:29.298159 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 28 05:17:29.480254 systemd-networkd[686]: eth0: Gained IPv6LL Oct 28 05:17:29.536524 ignition[854]: Ignition 2.22.0 Oct 28 05:17:29.537465 ignition[854]: Stage: fetch-offline Oct 28 05:17:29.537548 ignition[854]: no configs at "/usr/lib/ignition/base.d" Oct 28 05:17:29.537567 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 28 05:17:29.537716 ignition[854]: parsed url from cmdline: "" Oct 28 05:17:29.537720 ignition[854]: no config URL provided Oct 28 05:17:29.537727 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Oct 28 05:17:29.537737 ignition[854]: no config at "/usr/lib/ignition/user.ign" Oct 28 05:17:29.540731 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 05:17:29.537743 ignition[854]: failed to fetch config: resource requires networking Oct 28 05:17:29.537944 ignition[854]: Ignition finished successfully Oct 28 05:17:29.543527 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 28 05:17:29.586153 ignition[861]: Ignition 2.22.0 Oct 28 05:17:29.586169 ignition[861]: Stage: fetch Oct 28 05:17:29.586390 ignition[861]: no configs at "/usr/lib/ignition/base.d" Oct 28 05:17:29.586400 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 28 05:17:29.586550 ignition[861]: parsed url from cmdline: "" Oct 28 05:17:29.586556 ignition[861]: no config URL provided Oct 28 05:17:29.586565 ignition[861]: reading system config file "/usr/lib/ignition/user.ign" Oct 28 05:17:29.586577 ignition[861]: no config at "/usr/lib/ignition/user.ign" Oct 28 05:17:29.586629 ignition[861]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Oct 28 05:17:29.603505 ignition[861]: GET result: OK Oct 28 05:17:29.604276 ignition[861]: parsing config with SHA512: 7c4f543b189b6b2a27a25cbc393e1d682cf2b59601f6b4297a500ec688669a66d81c30227640b64a23400f233c6e856c378abc4cf187a05584b17f356d06d80b Oct 28 05:17:29.608295 systemd-networkd[686]: eth1: Gained IPv6LL Oct 28 05:17:29.614116 unknown[861]: fetched base config from "system" Oct 28 05:17:29.614142 unknown[861]: fetched base config from "system" Oct 28 05:17:29.614153 unknown[861]: fetched user config from "digitalocean" Oct 28 05:17:29.615206 ignition[861]: fetch: fetch complete Oct 28 05:17:29.615216 ignition[861]: fetch: fetch passed Oct 28 05:17:29.615303 ignition[861]: Ignition finished successfully Oct 28 05:17:29.619226 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 28 05:17:29.621647 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 28 05:17:29.679391 ignition[868]: Ignition 2.22.0 Oct 28 05:17:29.680358 ignition[868]: Stage: kargs Oct 28 05:17:29.680602 ignition[868]: no configs at "/usr/lib/ignition/base.d" Oct 28 05:17:29.680613 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 28 05:17:29.684201 ignition[868]: kargs: kargs passed Oct 28 05:17:29.684722 ignition[868]: Ignition finished successfully Oct 28 05:17:29.686719 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 28 05:17:29.691297 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 28 05:17:29.735005 ignition[874]: Ignition 2.22.0 Oct 28 05:17:29.735033 ignition[874]: Stage: disks Oct 28 05:17:29.735276 ignition[874]: no configs at "/usr/lib/ignition/base.d" Oct 28 05:17:29.735287 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 28 05:17:29.736561 ignition[874]: disks: disks passed Oct 28 05:17:29.736621 ignition[874]: Ignition finished successfully Oct 28 05:17:29.739480 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 28 05:17:29.740925 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 28 05:17:29.742042 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 28 05:17:29.742641 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 05:17:29.743620 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 05:17:29.744605 systemd[1]: Reached target basic.target - Basic System. Oct 28 05:17:29.746745 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 28 05:17:29.788675 systemd-fsck[883]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 28 05:17:29.794629 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 28 05:17:29.799211 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 28 05:17:29.941071 kernel: EXT4-fs (vda9): mounted filesystem 0ce42fa0-8451-4928-b788-6e54ab295d7a r/w with ordered data mode. Quota mode: none. Oct 28 05:17:29.941880 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 28 05:17:29.943227 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 28 05:17:29.946395 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 28 05:17:29.949287 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 28 05:17:29.954297 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Oct 28 05:17:29.962255 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 28 05:17:29.966286 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 28 05:17:29.966352 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 05:17:29.976046 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892) Oct 28 05:17:29.980192 kernel: BTRFS info (device vda6): first mount of filesystem 7acd037c-32ce-4796-90d6-101869832417 Oct 28 05:17:29.979007 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 28 05:17:29.983559 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 05:17:29.995220 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 28 05:17:30.022338 kernel: BTRFS info (device vda6): turning on async discard Oct 28 05:17:30.022427 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 05:17:30.042333 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 28 05:17:30.074047 coreos-metadata[895]: Oct 28 05:17:30.073 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 28 05:17:30.083200 coreos-metadata[894]: Oct 28 05:17:30.083 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 28 05:17:30.084859 coreos-metadata[895]: Oct 28 05:17:30.084 INFO Fetch successful Oct 28 05:17:30.089952 coreos-metadata[895]: Oct 28 05:17:30.089 INFO wrote hostname ci-4501.0.0-n-a8513f8a3e to /sysroot/etc/hostname Oct 28 05:17:30.091238 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 28 05:17:30.094008 initrd-setup-root[922]: cut: /sysroot/etc/passwd: No such file or directory Oct 28 05:17:30.098228 coreos-metadata[894]: Oct 28 05:17:30.098 INFO Fetch successful Oct 28 05:17:30.106097 initrd-setup-root[930]: cut: /sysroot/etc/group: No such file or directory Oct 28 05:17:30.109317 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Oct 28 05:17:30.109680 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Oct 28 05:17:30.114635 initrd-setup-root[938]: cut: /sysroot/etc/shadow: No such file or directory Oct 28 05:17:30.121294 initrd-setup-root[945]: cut: /sysroot/etc/gshadow: No such file or directory Oct 28 05:17:30.255172 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 28 05:17:30.258933 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 28 05:17:30.262327 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 28 05:17:30.294589 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 28 05:17:30.295577 kernel: BTRFS info (device vda6): last unmount of filesystem 7acd037c-32ce-4796-90d6-101869832417 Oct 28 05:17:30.324974 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 28 05:17:30.346685 ignition[1013]: INFO : Ignition 2.22.0 Oct 28 05:17:30.347620 ignition[1013]: INFO : Stage: mount Oct 28 05:17:30.350133 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 05:17:30.350133 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 28 05:17:30.352125 ignition[1013]: INFO : mount: mount passed Oct 28 05:17:30.352745 ignition[1013]: INFO : Ignition finished successfully Oct 28 05:17:30.353455 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 28 05:17:30.356125 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 28 05:17:30.379992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 28 05:17:30.405064 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1025) Oct 28 05:17:30.405339 kernel: BTRFS info (device vda6): first mount of filesystem 7acd037c-32ce-4796-90d6-101869832417 Oct 28 05:17:30.407386 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 05:17:30.412297 kernel: BTRFS info (device vda6): turning on async discard Oct 28 05:17:30.412374 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 05:17:30.415795 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
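
[Editor's note] The two coreos-metadata fetches above hit the droplet metadata JSON, and one of them writes the hostname into /sysroot/etc/hostname. A hedged sketch of that flow; the URL and target path come from the log, while the top-level "hostname" key is an assumption about the v1.json payload:

    # Sketch of the metadata fetch and hostname write visible above.
    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # URL from the log

    def fetch_metadata(url: str = METADATA_URL) -> dict:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)

    def write_hostname(meta: dict, path: str = "/sysroot/etc/hostname") -> None:
        # Assumed key; the log only shows the resulting hostname string.
        with open(path, "w") as f:
            f.write(meta["hostname"] + "\n")

    if __name__ == "__main__":
        write_hostname(fetch_metadata())
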
Oct 28 05:17:30.465064 ignition[1041]: INFO : Ignition 2.22.0 Oct 28 05:17:30.465064 ignition[1041]: INFO : Stage: files Oct 28 05:17:30.466533 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 05:17:30.466533 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 28 05:17:30.468061 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping Oct 28 05:17:30.468061 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 28 05:17:30.468061 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 28 05:17:30.472995 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 28 05:17:30.474120 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 28 05:17:30.474120 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 28 05:17:30.473655 unknown[1041]: wrote ssh authorized keys file for user: core Oct 28 05:17:30.476959 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 28 05:17:30.476959 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 28 05:17:30.592654 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 28 05:17:30.670533 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 28 05:17:30.670533 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 28 05:17:30.672594 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 28 05:17:30.672594 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 28 05:17:30.672594 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 28 05:17:30.672594 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 28 05:17:30.672594 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 28 05:17:30.672594 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 28 05:17:30.672594 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 28 05:17:30.672594 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 28 05:17:30.678447 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 28 05:17:30.678447 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 28 05:17:30.678447 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 28 05:17:30.678447 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 28 05:17:30.678447 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 28 05:17:31.195060 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 28 05:17:33.410777 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 28 05:17:33.410777 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 28 05:17:33.413460 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 28 05:17:33.413460 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 28 05:17:33.413460 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 28 05:17:33.413460 ignition[1041]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 28 05:17:33.413460 ignition[1041]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 28 05:17:33.413460 ignition[1041]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 28 05:17:33.419844 ignition[1041]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 28 05:17:33.419844 ignition[1041]: INFO : files: files passed Oct 28 05:17:33.419844 ignition[1041]: INFO : Ignition finished successfully Oct 28 05:17:33.417235 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 28 05:17:33.422252 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 28 05:17:33.425505 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 28 05:17:33.437272 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 28 05:17:33.438282 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 28 05:17:33.449716 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 28 05:17:33.450987 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 28 05:17:33.452694 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 28 05:17:33.454452 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 28 05:17:33.455853 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 28 05:17:33.457957 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 28 05:17:33.522847 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 28 05:17:33.522977 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
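
[Editor's note] The files stage above downloads artifacts such as the Helm tarball, writes them under /sysroot, and creates the /etc/extensions/kubernetes.raw link. A simplified sketch of those two operations, with paths and URLs taken from the log; the verification, ownership, and SELinux relabeling that Ignition performs are ignored here:

    # Sketch of two operations from the files stage above: an op(3)-style
    # file write from a URL and an op(9)-style symlink creation.
    import os
    import urllib.request

    SYSROOT = "/sysroot"

    def write_file_from_url(url: str, dest: str, mode: int = 0o644) -> None:
        with urllib.request.urlopen(url, timeout=30) as resp:
            data = resp.read()
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "wb") as f:
            f.write(data)
        os.chmod(dest, mode)

    def write_link(path: str, target: str) -> None:
        os.makedirs(os.path.dirname(path), exist_ok=True)
        os.symlink(target, path)

    if __name__ == "__main__":
        write_file_from_url("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
                            os.path.join(SYSROOT, "opt/helm-v3.17.3-linux-amd64.tar.gz"))
        write_link(os.path.join(SYSROOT, "etc/extensions/kubernetes.raw"),
                   "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw")
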
Oct 28 05:17:33.524528 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 28 05:17:33.525511 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 28 05:17:33.526834 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 28 05:17:33.528070 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 28 05:17:33.572278 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 28 05:17:33.574607 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 28 05:17:33.611154 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 28 05:17:33.611427 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 28 05:17:33.613967 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 05:17:33.614785 systemd[1]: Stopped target timers.target - Timer Units. Oct 28 05:17:33.616329 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 28 05:17:33.616569 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 28 05:17:33.618318 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 28 05:17:33.619653 systemd[1]: Stopped target basic.target - Basic System. Oct 28 05:17:33.620687 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 28 05:17:33.621610 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 05:17:33.622637 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 28 05:17:33.623675 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 28 05:17:33.624875 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 28 05:17:33.625904 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 28 05:17:33.626988 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 28 05:17:33.628217 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 28 05:17:33.629074 systemd[1]: Stopped target swap.target - Swaps. Oct 28 05:17:33.630078 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 28 05:17:33.630246 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 28 05:17:33.631488 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 28 05:17:33.632300 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 05:17:33.633405 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 28 05:17:33.633491 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 05:17:33.634530 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 28 05:17:33.634665 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 28 05:17:33.636499 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 28 05:17:33.636707 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 28 05:17:33.638225 systemd[1]: ignition-files.service: Deactivated successfully. Oct 28 05:17:33.638407 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 28 05:17:33.639434 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Oct 28 05:17:33.639628 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 28 05:17:33.643303 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 28 05:17:33.644068 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 28 05:17:33.644265 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 05:17:33.651087 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 28 05:17:33.654721 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 28 05:17:33.655624 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 05:17:33.657226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 28 05:17:33.658082 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 05:17:33.659503 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 28 05:17:33.660587 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 28 05:17:33.675073 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 28 05:17:33.676102 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 28 05:17:33.695006 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 28 05:17:33.710744 ignition[1097]: INFO : Ignition 2.22.0 Oct 28 05:17:33.710744 ignition[1097]: INFO : Stage: umount Oct 28 05:17:33.710744 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 05:17:33.710744 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 28 05:17:33.715199 ignition[1097]: INFO : umount: umount passed Oct 28 05:17:33.715199 ignition[1097]: INFO : Ignition finished successfully Oct 28 05:17:33.713882 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 28 05:17:33.714735 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 28 05:17:33.716574 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 28 05:17:33.716730 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 28 05:17:33.720768 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 28 05:17:33.720873 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 28 05:17:33.721776 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 28 05:17:33.721864 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 28 05:17:33.722726 systemd[1]: Stopped target network.target - Network. Oct 28 05:17:33.727594 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 28 05:17:33.727740 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 05:17:33.729419 systemd[1]: Stopped target paths.target - Path Units. Oct 28 05:17:33.732176 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 28 05:17:33.738168 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 05:17:33.738967 systemd[1]: Stopped target slices.target - Slice Units. Oct 28 05:17:33.740307 systemd[1]: Stopped target sockets.target - Socket Units. Oct 28 05:17:33.741368 systemd[1]: iscsid.socket: Deactivated successfully. Oct 28 05:17:33.741440 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 28 05:17:33.742346 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 28 05:17:33.742407 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Oct 28 05:17:33.743254 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 28 05:17:33.743354 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 28 05:17:33.744316 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 28 05:17:33.744395 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 28 05:17:33.745395 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 28 05:17:33.746292 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 28 05:17:33.748796 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 28 05:17:33.748917 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 28 05:17:33.750453 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 28 05:17:33.750521 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 28 05:17:33.757946 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 28 05:17:33.758649 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 28 05:17:33.762928 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 28 05:17:33.763182 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 28 05:17:33.768297 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 28 05:17:33.769016 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 28 05:17:33.769101 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 28 05:17:33.771458 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 28 05:17:33.772139 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 28 05:17:33.772233 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 05:17:33.772937 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 28 05:17:33.773004 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 28 05:17:33.773653 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 28 05:17:33.773717 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 28 05:17:33.774764 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 05:17:33.803184 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 28 05:17:33.803430 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 05:17:33.805357 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 28 05:17:33.805467 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 28 05:17:33.806626 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 28 05:17:33.806680 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 05:17:33.807822 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 28 05:17:33.807969 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 28 05:17:33.811419 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 28 05:17:33.811502 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 28 05:17:33.812566 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 28 05:17:33.812649 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 28 05:17:33.818620 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Oct 28 05:17:33.819363 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 28 05:17:33.819459 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 05:17:33.820910 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 28 05:17:33.820996 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 05:17:33.822551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 05:17:33.822636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:17:33.827154 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 28 05:17:33.827304 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 28 05:17:33.835579 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 28 05:17:33.835747 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 28 05:17:33.837899 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 28 05:17:33.839822 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 28 05:17:33.865636 systemd[1]: Switching root. Oct 28 05:17:33.916441 systemd-journald[293]: Journal stopped Oct 28 05:17:35.103735 systemd-journald[293]: Received SIGTERM from PID 1 (systemd). Oct 28 05:17:35.103836 kernel: SELinux: policy capability network_peer_controls=1 Oct 28 05:17:35.103945 kernel: SELinux: policy capability open_perms=1 Oct 28 05:17:35.103963 kernel: SELinux: policy capability extended_socket_class=1 Oct 28 05:17:35.103978 kernel: SELinux: policy capability always_check_network=0 Oct 28 05:17:35.103997 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 28 05:17:35.104542 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 28 05:17:35.104574 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 28 05:17:35.104602 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 28 05:17:35.104621 kernel: SELinux: policy capability userspace_initial_context=0 Oct 28 05:17:35.104639 kernel: audit: type=1403 audit(1761628654.046:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 28 05:17:35.104663 systemd[1]: Successfully loaded SELinux policy in 70.414ms. Oct 28 05:17:35.104681 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.329ms. Oct 28 05:17:35.104701 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 28 05:17:35.104715 systemd[1]: Detected virtualization kvm. Oct 28 05:17:35.104729 systemd[1]: Detected architecture x86-64. Oct 28 05:17:35.104744 systemd[1]: Detected first boot. Oct 28 05:17:35.104767 systemd[1]: Hostname set to . Oct 28 05:17:35.104781 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 28 05:17:35.104796 zram_generator::config[1142]: No configuration found. Oct 28 05:17:35.104813 kernel: Guest personality initialized and is inactive Oct 28 05:17:35.104825 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 28 05:17:35.104839 kernel: Initialized host personality Oct 28 05:17:35.104851 kernel: NET: Registered PF_VSOCK protocol family Oct 28 05:17:35.104864 systemd[1]: Populated /etc with preset unit settings. 
Oct 28 05:17:35.104878 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 28 05:17:35.104894 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 28 05:17:35.104908 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 28 05:17:35.104923 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 28 05:17:35.104937 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 28 05:17:35.104951 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 28 05:17:35.104966 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 28 05:17:35.104991 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 28 05:17:35.105009 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 28 05:17:35.105022 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 28 05:17:35.105068 systemd[1]: Created slice user.slice - User and Session Slice. Oct 28 05:17:35.105085 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 05:17:35.105105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 05:17:35.105126 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 28 05:17:35.105146 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 28 05:17:35.105173 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 28 05:17:35.105195 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 28 05:17:35.105217 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 28 05:17:35.105239 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 05:17:35.105261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 28 05:17:35.105288 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 28 05:17:35.105306 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 28 05:17:35.105320 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 28 05:17:35.105339 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 28 05:17:35.105362 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 05:17:35.105384 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 28 05:17:35.105407 systemd[1]: Reached target slices.target - Slice Units. Oct 28 05:17:35.105435 systemd[1]: Reached target swap.target - Swaps. Oct 28 05:17:35.105461 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 28 05:17:35.105484 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 28 05:17:35.105507 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 28 05:17:35.105529 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 28 05:17:35.105552 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 28 05:17:35.105576 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Oct 28 05:17:35.105599 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 28 05:17:35.105627 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 28 05:17:35.105647 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 28 05:17:35.105669 systemd[1]: Mounting media.mount - External Media Directory... Oct 28 05:17:35.105693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:17:35.105716 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 28 05:17:35.105739 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 28 05:17:35.105769 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 28 05:17:35.105794 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 28 05:17:35.105818 systemd[1]: Reached target machines.target - Containers. Oct 28 05:17:35.105840 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 28 05:17:35.105862 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 05:17:35.105885 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 28 05:17:35.105909 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 28 05:17:35.111112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 05:17:35.111159 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 28 05:17:35.111175 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 05:17:35.111189 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 28 05:17:35.111204 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 05:17:35.111220 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 28 05:17:35.111235 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 28 05:17:35.111257 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 28 05:17:35.111271 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 28 05:17:35.111285 systemd[1]: Stopped systemd-fsck-usr.service. Oct 28 05:17:35.111301 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 05:17:35.111316 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 28 05:17:35.111330 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 28 05:17:35.111345 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 28 05:17:35.111362 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 28 05:17:35.111376 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 28 05:17:35.111391 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Oct 28 05:17:35.111413 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:17:35.111434 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 28 05:17:35.111457 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 28 05:17:35.111477 systemd[1]: Mounted media.mount - External Media Directory. Oct 28 05:17:35.111498 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 28 05:17:35.111520 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 28 05:17:35.111540 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 28 05:17:35.111579 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 05:17:35.111600 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 28 05:17:35.111621 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 28 05:17:35.111643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 05:17:35.111668 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 05:17:35.111696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 05:17:35.111716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 05:17:35.111736 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 05:17:35.111757 kernel: fuse: init (API version 7.41) Oct 28 05:17:35.111776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 05:17:35.111791 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 28 05:17:35.111806 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 28 05:17:35.111824 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 28 05:17:35.111839 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 28 05:17:35.111870 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 28 05:17:35.111885 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 28 05:17:35.111899 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 28 05:17:35.111914 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 28 05:17:35.111931 kernel: ACPI: bus type drm_connector registered Oct 28 05:17:35.111945 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 05:17:35.111967 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 28 05:17:35.111990 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 05:17:35.114722 systemd-journald[1216]: Collecting audit messages is disabled. Oct 28 05:17:35.114790 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 28 05:17:35.114811 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 05:17:35.114833 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 28 05:17:35.114847 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 28 05:17:35.114862 systemd-journald[1216]: Journal started Oct 28 05:17:35.114889 systemd-journald[1216]: Runtime Journal (/run/log/journal/54c34fb158524103a082efa0e99d1082) is 4.9M, max 39.1M, 34.2M free. Oct 28 05:17:34.719698 systemd[1]: Queued start job for default target multi-user.target. Oct 28 05:17:34.745048 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 28 05:17:34.745576 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 28 05:17:35.122072 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 05:17:35.129615 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 28 05:17:35.129693 systemd[1]: Started systemd-journald.service - Journal Service. Oct 28 05:17:35.132701 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 05:17:35.132990 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 05:17:35.134313 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 05:17:35.135514 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 28 05:17:35.137573 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 28 05:17:35.138918 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 28 05:17:35.156567 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 05:17:35.159365 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 28 05:17:35.166568 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 28 05:17:35.167815 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 28 05:17:35.170282 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 28 05:17:35.184301 kernel: loop1: detected capacity change from 0 to 111544 Oct 28 05:17:35.206843 systemd-journald[1216]: Time spent on flushing to /var/log/journal/54c34fb158524103a082efa0e99d1082 is 58.330ms for 1003 entries. Oct 28 05:17:35.206843 systemd-journald[1216]: System Journal (/var/log/journal/54c34fb158524103a082efa0e99d1082) is 8M, max 163.5M, 155.5M free. Oct 28 05:17:35.280240 systemd-journald[1216]: Received client request to flush runtime journal. Oct 28 05:17:35.280312 kernel: loop2: detected capacity change from 0 to 8 Oct 28 05:17:35.280339 kernel: loop3: detected capacity change from 0 to 128912 Oct 28 05:17:35.207676 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 28 05:17:35.213105 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 28 05:17:35.217617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 05:17:35.235641 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 28 05:17:35.283534 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 28 05:17:35.305708 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 28 05:17:35.308058 kernel: loop4: detected capacity change from 0 to 219144 Oct 28 05:17:35.314254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 05:17:35.316329 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Oct 28 05:17:35.317486 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 05:17:35.336891 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 28 05:17:35.353078 kernel: loop5: detected capacity change from 0 to 111544 Oct 28 05:17:35.372865 kernel: loop6: detected capacity change from 0 to 8 Oct 28 05:17:35.370691 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Oct 28 05:17:35.370711 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Oct 28 05:17:35.385741 kernel: loop7: detected capacity change from 0 to 128912 Oct 28 05:17:35.386181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 05:17:35.408047 kernel: loop1: detected capacity change from 0 to 219144 Oct 28 05:17:35.427241 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 28 05:17:35.428125 (sd-merge)[1291]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Oct 28 05:17:35.437006 (sd-merge)[1291]: Merged extensions into '/usr'. Oct 28 05:17:35.445208 systemd[1]: Reload requested from client PID 1244 ('systemd-sysext') (unit systemd-sysext.service)... Oct 28 05:17:35.445233 systemd[1]: Reloading... Oct 28 05:17:35.628124 zram_generator::config[1328]: No configuration found. Oct 28 05:17:35.662172 systemd-resolved[1285]: Positive Trust Anchors: Oct 28 05:17:35.662520 systemd-resolved[1285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 05:17:35.662593 systemd-resolved[1285]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 05:17:35.662664 systemd-resolved[1285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 05:17:35.680650 systemd-resolved[1285]: Using system hostname 'ci-4501.0.0-n-a8513f8a3e'. Oct 28 05:17:35.875545 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 28 05:17:35.875677 systemd[1]: Reloading finished in 429 ms. Oct 28 05:17:35.892157 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 05:17:35.893171 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 28 05:17:35.896602 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 28 05:17:35.906259 systemd[1]: Starting ensure-sysext.service... Oct 28 05:17:35.909300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 28 05:17:35.947696 systemd[1]: Reload requested from client PID 1367 ('systemctl') (unit ensure-sysext.service)... Oct 28 05:17:35.947735 systemd[1]: Reloading... Oct 28 05:17:35.988708 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 28 05:17:35.988742 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
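
[Editor's note] The (sd-merge) lines above show systemd-sysext picking up four .raw sysext images and merging them into /usr. A rough sketch of the discovery step only; the directory list is an assumption about systemd-sysext's conventional search paths, and the actual overlay mount of the images is omitted:

    # Sketch: enumerate sysext images as suggested by the "Using extensions ..."
    # line above; SEARCH_DIRS is an assumed default, not taken from the log.
    import glob
    import os

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions() -> list[str]:
        found = []
        for d in SEARCH_DIRS:
            found.extend(sorted(glob.glob(os.path.join(d, "*.raw"))))
        return found

    if __name__ == "__main__":
        for image in discover_extensions():
            print(os.path.basename(image))  # e.g. kubernetes.raw, oem-digitalocean.raw
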
Oct 28 05:17:35.989127 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 28 05:17:35.989486 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 28 05:17:35.992560 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 28 05:17:35.992840 systemd-tmpfiles[1368]: ACLs are not supported, ignoring. Oct 28 05:17:35.992901 systemd-tmpfiles[1368]: ACLs are not supported, ignoring. Oct 28 05:17:36.002623 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 05:17:36.002869 systemd-tmpfiles[1368]: Skipping /boot Oct 28 05:17:36.066115 zram_generator::config[1394]: No configuration found. Oct 28 05:17:36.066914 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 05:17:36.070122 systemd-tmpfiles[1368]: Skipping /boot Oct 28 05:17:36.332813 systemd[1]: Reloading finished in 384 ms. Oct 28 05:17:36.349383 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 28 05:17:36.370463 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 05:17:36.381200 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 05:17:36.385334 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 28 05:17:36.387717 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 28 05:17:36.397458 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 28 05:17:36.401972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 05:17:36.405524 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 28 05:17:36.411069 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:17:36.411275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 05:17:36.414334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 05:17:36.418397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 05:17:36.428916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 05:17:36.430464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 05:17:36.430608 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 05:17:36.430710 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:17:36.438138 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:17:36.438335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 05:17:36.438504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 28 05:17:36.438622 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 05:17:36.438706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:17:36.451722 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:17:36.453246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 05:17:36.460242 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 28 05:17:36.462004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 05:17:36.462177 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 05:17:36.462316 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:17:36.474230 systemd[1]: Finished ensure-sysext.service. Oct 28 05:17:36.476640 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 05:17:36.476811 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 05:17:36.485926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 05:17:36.486572 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 05:17:36.490888 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 05:17:36.500637 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 28 05:17:36.502237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 05:17:36.502412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 05:17:36.505452 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 05:17:36.506091 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 05:17:36.518003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 05:17:36.525343 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 28 05:17:36.552742 systemd-udevd[1448]: Using default interface naming scheme 'v257'. Oct 28 05:17:36.562186 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 28 05:17:36.584643 augenrules[1482]: No rules Oct 28 05:17:36.584943 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 05:17:36.585314 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 05:17:36.606036 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 28 05:17:36.607917 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 28 05:17:36.614685 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 05:17:36.619873 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 05:17:36.681046 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 28 05:17:36.682271 systemd[1]: Reached target time-set.target - System Time Set. Oct 28 05:17:36.801697 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Oct 28 05:17:36.803815 systemd-networkd[1491]: lo: Link UP Oct 28 05:17:36.803825 systemd-networkd[1491]: lo: Gained carrier Oct 28 05:17:36.808233 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Oct 28 05:17:36.808756 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:17:36.808926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 05:17:36.812498 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 05:17:36.819419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 05:17:36.828864 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 05:17:36.830219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 05:17:36.830281 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 05:17:36.830323 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 28 05:17:36.830346 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:17:36.830640 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 05:17:36.832448 systemd[1]: Reached target network.target - Network. Oct 28 05:17:36.837878 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 28 05:17:36.845488 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 28 05:17:36.900058 kernel: ISO 9660 Extensions: RRIP_1991A Oct 28 05:17:36.906050 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Oct 28 05:17:36.911000 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 05:17:36.912844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 05:17:36.932730 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 28 05:17:36.938920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 05:17:36.941101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 05:17:36.942408 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 05:17:36.947379 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 28 05:17:36.947579 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 05:17:36.948558 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 05:17:36.953114 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 28 05:17:37.028432 systemd-networkd[1491]: eth1: Configuring with /run/systemd/network/10-c6:94:58:5f:bf:ae.network. Oct 28 05:17:37.029266 systemd-networkd[1491]: eth1: Link UP Oct 28 05:17:37.030265 systemd-networkd[1491]: eth1: Gained carrier Oct 28 05:17:37.042444 systemd-networkd[1491]: eth0: Configuring with /run/systemd/network/10-62:96:45:42:8a:17.network. Oct 28 05:17:37.044212 systemd-networkd[1491]: eth0: Link UP Oct 28 05:17:37.044938 systemd-networkd[1491]: eth0: Gained carrier Oct 28 05:17:37.045336 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Oct 28 05:17:37.051176 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Oct 28 05:17:37.116052 kernel: mousedev: PS/2 mouse device common for all mice Oct 28 05:17:37.143068 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 28 05:17:37.157871 kernel: ACPI: button: Power Button [PWRF] Oct 28 05:17:37.157955 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 28 05:17:37.172627 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 28 05:17:37.224134 ldconfig[1445]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 28 05:17:37.231141 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 28 05:17:37.235702 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 28 05:17:37.275752 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 28 05:17:37.278365 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 05:17:37.279001 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 28 05:17:37.280378 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 28 05:17:37.282164 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 28 05:17:37.282876 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 28 05:17:37.283781 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 28 05:17:37.285486 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 28 05:17:37.286016 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 28 05:17:37.286060 systemd[1]: Reached target paths.target - Path Units. Oct 28 05:17:37.287118 systemd[1]: Reached target timers.target - Timer Units. Oct 28 05:17:37.288670 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 28 05:17:37.291977 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 28 05:17:37.298756 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Oct 28 05:17:37.300136 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 28 05:17:37.301242 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 28 05:17:37.322833 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 28 05:17:37.324959 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 28 05:17:37.326928 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 28 05:17:37.334112 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 28 05:17:37.335247 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 05:17:37.336673 systemd[1]: Reached target basic.target - Basic System. Oct 28 05:17:37.338230 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 28 05:17:37.338260 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 28 05:17:37.339638 systemd[1]: Starting containerd.service - containerd container runtime... Oct 28 05:17:37.344881 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 28 05:17:37.349338 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 28 05:17:37.358367 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 28 05:17:37.364270 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 28 05:17:37.408752 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 28 05:17:37.409330 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 28 05:17:37.417259 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 28 05:17:37.424981 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 28 05:17:37.428296 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 28 05:17:37.430338 jq[1557]: false Oct 28 05:17:37.435983 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 28 05:17:37.444456 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 28 05:17:37.450895 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 28 05:17:37.457180 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 28 05:17:37.457689 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 28 05:17:37.458328 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 28 05:17:37.461916 systemd[1]: Starting update-engine.service - Update Engine... Oct 28 05:17:37.465221 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 28 05:17:37.466711 extend-filesystems[1558]: Found /dev/vda6 Oct 28 05:17:37.469147 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 28 05:17:37.474433 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Oct 28 05:17:37.485078 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing passwd entry cache Oct 28 05:17:37.485396 extend-filesystems[1558]: Found /dev/vda9 Oct 28 05:17:37.481872 oslogin_cache_refresh[1559]: Refreshing passwd entry cache Oct 28 05:17:37.497072 extend-filesystems[1558]: Checking size of /dev/vda9 Oct 28 05:17:37.491185 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 28 05:17:37.489832 oslogin_cache_refresh[1559]: Failure getting users, quitting Oct 28 05:17:37.497861 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting users, quitting Oct 28 05:17:37.497861 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 05:17:37.497861 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing group entry cache Oct 28 05:17:37.497861 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting groups, quitting Oct 28 05:17:37.497861 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 28 05:17:37.489860 oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 05:17:37.489936 oslogin_cache_refresh[1559]: Refreshing group entry cache Oct 28 05:17:37.492216 oslogin_cache_refresh[1559]: Failure getting groups, quitting Oct 28 05:17:37.492236 oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 28 05:17:37.500696 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 28 05:17:37.521212 extend-filesystems[1558]: Resized partition /dev/vda9 Oct 28 05:17:37.502126 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 28 05:17:37.527129 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 28 05:17:37.535370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 05:17:37.554836 extend-filesystems[1589]: resize2fs 1.47.3 (8-Jul-2025) Oct 28 05:17:37.548977 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 28 05:17:37.562230 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 28 05:17:37.573182 coreos-metadata[1554]: Oct 28 05:17:37.569 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 28 05:17:37.573452 jq[1572]: true Oct 28 05:17:37.584220 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Oct 28 05:17:37.588654 coreos-metadata[1554]: Oct 28 05:17:37.587 INFO Fetch successful Oct 28 05:17:37.618584 jq[1593]: true Oct 28 05:17:37.618888 update_engine[1571]: I20251028 05:17:37.618391 1571 main.cc:92] Flatcar Update Engine starting Oct 28 05:17:37.648497 tar[1577]: linux-amd64/LICENSE Oct 28 05:17:37.656119 tar[1577]: linux-amd64/helm Oct 28 05:17:37.673121 dbus-daemon[1555]: [system] SELinux support is enabled Oct 28 05:17:37.678370 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 28 05:17:37.689419 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 28 05:17:37.689454 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
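The oslogin_cache_refresh failures above appear benign on this platform: the Google OS Login cache refresher cannot enumerate OS Login users or groups on a DigitalOcean droplet, so it writes empty cache files and exits. If in doubt, whether an oslogin NSS module is even consulted for lookups can be checked from a shell, for example:

    grep oslogin /etc/nsswitch.conf    # is an oslogin NSS module wired into passwd/group lookups?
    getent passwd core                 # local lookups for the core user still resolve normally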
Oct 28 05:17:37.691133 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 28 05:17:37.691559 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Oct 28 05:17:37.691589 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 28 05:17:37.712087 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Oct 28 05:17:37.712179 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Oct 28 05:17:37.727593 kernel: Console: switching to colour dummy device 80x25 Oct 28 05:17:37.727714 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 28 05:17:37.727734 kernel: [drm] features: -context_init Oct 28 05:17:37.727749 kernel: [drm] number of scanouts: 1 Oct 28 05:17:37.727763 kernel: [drm] number of cap sets: 0 Oct 28 05:17:37.727777 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Oct 28 05:17:37.733329 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 28 05:17:37.733483 kernel: Console: switching to colour frame buffer device 128x48 Oct 28 05:17:37.741064 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 28 05:17:37.753166 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 28 05:17:37.762183 update_engine[1571]: I20251028 05:17:37.760491 1571 update_check_scheduler.cc:74] Next update check in 2m44s Oct 28 05:17:37.754906 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 28 05:17:37.757114 systemd[1]: Started update-engine.service - Update Engine. Oct 28 05:17:37.769071 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Oct 28 05:17:37.795602 extend-filesystems[1589]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 28 05:17:37.795602 extend-filesystems[1589]: old_desc_blocks = 1, new_desc_blocks = 7 Oct 28 05:17:37.795602 extend-filesystems[1589]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Oct 28 05:17:37.797807 extend-filesystems[1558]: Resized filesystem in /dev/vda9 Oct 28 05:17:37.798278 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 28 05:17:37.803154 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 28 05:17:37.803393 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 28 05:17:37.829614 bash[1635]: Updated "/home/core/.ssh/authorized_keys" Oct 28 05:17:37.949516 systemd-logind[1570]: New seat seat0. Oct 28 05:17:37.961686 systemd-logind[1570]: Watching system buttons on /dev/input/event2 (Power Button) Oct 28 05:17:37.961713 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 28 05:17:37.988204 systemd[1]: Started systemd-logind.service - User Login Management. Oct 28 05:17:37.989787 systemd[1]: motdgen.service: Deactivated successfully. Oct 28 05:17:37.990467 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 28 05:17:37.992239 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:17:37.995949 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 28 05:17:38.016418 systemd[1]: Starting sshkeys.service... 
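The extend-filesystems step above is a routine online ext4 grow of the root partition (456704 to 14138363 4k blocks). Done by hand on the same layout, the equivalent would be roughly the following sketch (growpart comes from cloud-utils):

    growpart /dev/vda 9     # extend partition 9 to fill the disk
    resize2fs /dev/vda9     # ext4 grows online while mounted at /
    df -h /                 # confirm the new size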
Oct 28 05:17:38.026087 kernel: EDAC MC: Ver: 3.0.0 Oct 28 05:17:38.163915 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 28 05:17:38.170950 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 28 05:17:38.173062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 05:17:38.173411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:17:38.173599 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 05:17:38.176318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 05:17:38.248188 systemd-networkd[1491]: eth1: Gained IPv6LL Oct 28 05:17:38.253190 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 28 05:17:38.254577 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Oct 28 05:17:38.258531 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 28 05:17:38.259965 systemd[1]: Reached target network-online.target - Network is Online. Oct 28 05:17:38.267752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 05:17:38.274659 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 28 05:17:38.377199 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 28 05:17:38.380577 coreos-metadata[1656]: Oct 28 05:17:38.377 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 28 05:17:38.395071 coreos-metadata[1656]: Oct 28 05:17:38.394 INFO Fetch successful Oct 28 05:17:38.406642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:17:38.423333 unknown[1656]: wrote ssh authorized keys file for user: core Oct 28 05:17:38.480051 containerd[1596]: time="2025-10-28T05:17:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 28 05:17:38.484960 containerd[1596]: time="2025-10-28T05:17:38.484596480Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 28 05:17:38.487265 update-ssh-keys[1679]: Updated "/home/core/.ssh/authorized_keys" Oct 28 05:17:38.490136 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 28 05:17:38.493373 systemd[1]: Finished sshkeys.service. 
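The metadata agents above read DigitalOcean's link-local metadata service; the same document they fetch, and the SSH keys written for the core user, can be inspected manually from inside the droplet, for example:

    curl -s http://169.254.169.254/metadata/v1.json | head -c 300; echo
    curl -s http://169.254.169.254/metadata/v1/public-keys    # the keys that end up in ~core/.ssh/authorized_keys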
Oct 28 05:17:38.532713 containerd[1596]: time="2025-10-28T05:17:38.532562888Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.821µs" Oct 28 05:17:38.533042 containerd[1596]: time="2025-10-28T05:17:38.533003388Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 28 05:17:38.533132 containerd[1596]: time="2025-10-28T05:17:38.533118683Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 28 05:17:38.533525 containerd[1596]: time="2025-10-28T05:17:38.533500816Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 28 05:17:38.533667 containerd[1596]: time="2025-10-28T05:17:38.533642988Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 28 05:17:38.533762 containerd[1596]: time="2025-10-28T05:17:38.533748430Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 05:17:38.533893 containerd[1596]: time="2025-10-28T05:17:38.533875863Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 05:17:38.533946 containerd[1596]: time="2025-10-28T05:17:38.533936737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 05:17:38.537225 containerd[1596]: time="2025-10-28T05:17:38.534944327Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 05:17:38.537225 containerd[1596]: time="2025-10-28T05:17:38.535703970Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 05:17:38.537225 containerd[1596]: time="2025-10-28T05:17:38.535732533Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 05:17:38.537225 containerd[1596]: time="2025-10-28T05:17:38.535742221Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 28 05:17:38.537225 containerd[1596]: time="2025-10-28T05:17:38.536413161Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 28 05:17:38.537225 containerd[1596]: time="2025-10-28T05:17:38.536742154Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 05:17:38.537225 containerd[1596]: time="2025-10-28T05:17:38.536777899Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 05:17:38.537225 containerd[1596]: time="2025-10-28T05:17:38.536790811Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 28 05:17:38.537225 containerd[1596]: time="2025-10-28T05:17:38.536821701Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 28 05:17:38.540713 containerd[1596]: 
time="2025-10-28T05:17:38.540252874Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 28 05:17:38.540713 containerd[1596]: time="2025-10-28T05:17:38.540415655Z" level=info msg="metadata content store policy set" policy=shared Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547110954Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547233815Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547250541Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547265263Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547387011Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547401059Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547433594Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547447986Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547470476Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547481264Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547491033Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547539339Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547738905Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 28 05:17:38.548588 containerd[1596]: time="2025-10-28T05:17:38.547772193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 28 05:17:38.549044 containerd[1596]: time="2025-10-28T05:17:38.547788401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 28 05:17:38.549044 containerd[1596]: time="2025-10-28T05:17:38.547853064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 28 05:17:38.549044 containerd[1596]: time="2025-10-28T05:17:38.547872806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 28 05:17:38.549044 containerd[1596]: time="2025-10-28T05:17:38.547883068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 28 05:17:38.549044 containerd[1596]: 
time="2025-10-28T05:17:38.547894251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 28 05:17:38.549044 containerd[1596]: time="2025-10-28T05:17:38.547904288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 28 05:17:38.549044 containerd[1596]: time="2025-10-28T05:17:38.547927540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 28 05:17:38.549044 containerd[1596]: time="2025-10-28T05:17:38.547938178Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 28 05:17:38.549044 containerd[1596]: time="2025-10-28T05:17:38.547961989Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 28 05:17:38.549044 containerd[1596]: time="2025-10-28T05:17:38.548084720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 28 05:17:38.549044 containerd[1596]: time="2025-10-28T05:17:38.548100533Z" level=info msg="Start snapshots syncer" Oct 28 05:17:38.549851 containerd[1596]: time="2025-10-28T05:17:38.549439549Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 28 05:17:38.550539 containerd[1596]: time="2025-10-28T05:17:38.549992952Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 28 05:17:38.550539 containerd[1596]: time="2025-10-28T05:17:38.550348556Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 28 05:17:38.550755 containerd[1596]: time="2025-10-28T05:17:38.550492763Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Oct 28 05:17:38.551004 containerd[1596]: time="2025-10-28T05:17:38.550980524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 28 05:17:38.553147 containerd[1596]: time="2025-10-28T05:17:38.552116010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 28 05:17:38.553147 containerd[1596]: time="2025-10-28T05:17:38.552145324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 28 05:17:38.553147 containerd[1596]: time="2025-10-28T05:17:38.552159418Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 28 05:17:38.553147 containerd[1596]: time="2025-10-28T05:17:38.552207158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 28 05:17:38.553147 containerd[1596]: time="2025-10-28T05:17:38.552220319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 28 05:17:38.553147 containerd[1596]: time="2025-10-28T05:17:38.552232018Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 28 05:17:38.553147 containerd[1596]: time="2025-10-28T05:17:38.552259653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 28 05:17:38.553147 containerd[1596]: time="2025-10-28T05:17:38.552270597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 28 05:17:38.553147 containerd[1596]: time="2025-10-28T05:17:38.552292103Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553586358Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553736188Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553756355Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553772592Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553785270Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553800188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553816148Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553839734Z" level=info msg="runtime interface created" Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553847560Z" level=info msg="created NRI interface" Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553883143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553905376Z" level=info msg="Connect containerd service" Oct 28 05:17:38.557221 containerd[1596]: time="2025-10-28T05:17:38.553951892Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 28 05:17:38.569622 containerd[1596]: time="2025-10-28T05:17:38.569556766Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 28 05:17:38.761710 systemd-networkd[1491]: eth0: Gained IPv6LL Oct 28 05:17:38.762489 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Oct 28 05:17:38.772716 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 28 05:17:38.842618 containerd[1596]: time="2025-10-28T05:17:38.841740245Z" level=info msg="Start subscribing containerd event" Oct 28 05:17:38.842618 containerd[1596]: time="2025-10-28T05:17:38.841813615Z" level=info msg="Start recovering state" Oct 28 05:17:38.842618 containerd[1596]: time="2025-10-28T05:17:38.841930081Z" level=info msg="Start event monitor" Oct 28 05:17:38.842618 containerd[1596]: time="2025-10-28T05:17:38.841945655Z" level=info msg="Start cni network conf syncer for default" Oct 28 05:17:38.842618 containerd[1596]: time="2025-10-28T05:17:38.841953066Z" level=info msg="Start streaming server" Oct 28 05:17:38.842618 containerd[1596]: time="2025-10-28T05:17:38.841972451Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 28 05:17:38.842618 containerd[1596]: time="2025-10-28T05:17:38.841980682Z" level=info msg="runtime interface starting up..." Oct 28 05:17:38.842618 containerd[1596]: time="2025-10-28T05:17:38.841986838Z" level=info msg="starting plugins..." Oct 28 05:17:38.842618 containerd[1596]: time="2025-10-28T05:17:38.842000496Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 28 05:17:38.843643 containerd[1596]: time="2025-10-28T05:17:38.843110829Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 28 05:17:38.843643 containerd[1596]: time="2025-10-28T05:17:38.843357388Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 28 05:17:38.843643 containerd[1596]: time="2025-10-28T05:17:38.843445196Z" level=info msg="containerd successfully booted in 0.364049s" Oct 28 05:17:38.845286 systemd[1]: Started containerd.service - containerd container runtime. Oct 28 05:17:38.872952 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 28 05:17:38.881362 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 28 05:17:38.912854 systemd[1]: issuegen.service: Deactivated successfully. Oct 28 05:17:38.914648 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 28 05:17:38.921307 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 28 05:17:38.968315 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 28 05:17:38.975086 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 28 05:17:38.979586 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 28 05:17:38.982344 systemd[1]: Reached target getty.target - Login Prompts. Oct 28 05:17:39.052178 tar[1577]: linux-amd64/README.md Oct 28 05:17:39.078855 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
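The containerd error above ("no network config found in /etc/cni/net.d") is expected on a fresh node: the CRI plugin loads CNI lazily and nothing has installed a network config yet; on a Kubernetes node that configuration normally arrives later with the cluster's network add-on. Purely as an illustration of the kind of file the check is looking for (hypothetical name and subnet), a minimal bridge configuration would be:

    cat >/etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.85.0.0/16" } ]] }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF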
Oct 28 05:17:39.246385 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 28 05:17:39.251462 systemd[1]: Started sshd@0-164.92.80.11:22-139.178.89.65:36822.service - OpenSSH per-connection server daemon (139.178.89.65:36822). Oct 28 05:17:39.373513 sshd[1717]: Accepted publickey for core from 139.178.89.65 port 36822 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:17:39.376620 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:17:39.390582 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 28 05:17:39.398555 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 28 05:17:39.417448 systemd-logind[1570]: New session 1 of user core. Oct 28 05:17:39.434448 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 28 05:17:39.441853 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 28 05:17:39.462678 (systemd)[1722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 28 05:17:39.469902 systemd-logind[1570]: New session c1 of user core. Oct 28 05:17:39.672430 systemd[1722]: Queued start job for default target default.target. Oct 28 05:17:39.677943 systemd[1722]: Created slice app.slice - User Application Slice. Oct 28 05:17:39.677990 systemd[1722]: Reached target paths.target - Paths. Oct 28 05:17:39.678332 systemd[1722]: Reached target timers.target - Timers. Oct 28 05:17:39.681345 systemd[1722]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 28 05:17:39.711513 systemd[1722]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 28 05:17:39.711652 systemd[1722]: Reached target sockets.target - Sockets. Oct 28 05:17:39.711708 systemd[1722]: Reached target basic.target - Basic System. Oct 28 05:17:39.711748 systemd[1722]: Reached target default.target - Main User Target. Oct 28 05:17:39.711780 systemd[1722]: Startup finished in 227ms. Oct 28 05:17:39.712474 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 28 05:17:39.725445 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 28 05:17:39.765178 systemd[1]: Started sshd@1-164.92.80.11:22-139.178.89.65:36834.service - OpenSSH per-connection server daemon (139.178.89.65:36834). Oct 28 05:17:39.863248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 05:17:39.866078 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 28 05:17:39.870666 sshd[1733]: Accepted publickey for core from 139.178.89.65 port 36834 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:17:39.870789 systemd[1]: Startup finished in 2.348s (kernel) + 7.338s (initrd) + 5.892s (userspace) = 15.578s. Oct 28 05:17:39.873713 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:17:39.881089 systemd-logind[1570]: New session 2 of user core. Oct 28 05:17:39.882538 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 05:17:39.883770 systemd[1]: Started session-2.scope - Session 2 of User core. 
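The startup summary in the lines above (2.348s kernel + 7.338s initrd + 5.892s userspace = 15.578s) can be broken down further with systemd-analyze once the system is up, for example:

    systemd-analyze                                    # repeats the kernel/initrd/userspace split
    systemd-analyze blame | head                       # slowest units first
    systemd-analyze critical-chain multi-user.target   # the dependency path that gated boot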
Oct 28 05:17:39.908839 sshd[1742]: Connection closed by 139.178.89.65 port 36834 Oct 28 05:17:39.909421 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Oct 28 05:17:39.923214 systemd[1]: sshd@1-164.92.80.11:22-139.178.89.65:36834.service: Deactivated successfully. Oct 28 05:17:39.926201 systemd[1]: session-2.scope: Deactivated successfully. Oct 28 05:17:39.928143 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Oct 28 05:17:39.933097 systemd-logind[1570]: Removed session 2. Oct 28 05:17:39.935434 systemd[1]: Started sshd@2-164.92.80.11:22-139.178.89.65:36842.service - OpenSSH per-connection server daemon (139.178.89.65:36842). Oct 28 05:17:40.014567 sshd[1748]: Accepted publickey for core from 139.178.89.65 port 36842 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:17:40.016451 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:17:40.025893 systemd-logind[1570]: New session 3 of user core. Oct 28 05:17:40.030313 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 28 05:17:40.050127 sshd[1755]: Connection closed by 139.178.89.65 port 36842 Oct 28 05:17:40.050334 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Oct 28 05:17:40.063171 systemd[1]: sshd@2-164.92.80.11:22-139.178.89.65:36842.service: Deactivated successfully. Oct 28 05:17:40.067124 systemd[1]: session-3.scope: Deactivated successfully. Oct 28 05:17:40.070819 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Oct 28 05:17:40.073467 systemd[1]: Started sshd@3-164.92.80.11:22-139.178.89.65:36850.service - OpenSSH per-connection server daemon (139.178.89.65:36850). Oct 28 05:17:40.078770 systemd-logind[1570]: Removed session 3. Oct 28 05:17:40.156923 sshd[1761]: Accepted publickey for core from 139.178.89.65 port 36850 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:17:40.159417 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:17:40.168187 systemd-logind[1570]: New session 4 of user core. Oct 28 05:17:40.175308 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 28 05:17:40.200072 sshd[1764]: Connection closed by 139.178.89.65 port 36850 Oct 28 05:17:40.200729 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Oct 28 05:17:40.217250 systemd[1]: sshd@3-164.92.80.11:22-139.178.89.65:36850.service: Deactivated successfully. Oct 28 05:17:40.221000 systemd[1]: session-4.scope: Deactivated successfully. Oct 28 05:17:40.240116 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Oct 28 05:17:40.244344 systemd[1]: Started sshd@4-164.92.80.11:22-139.178.89.65:36866.service - OpenSSH per-connection server daemon (139.178.89.65:36866). Oct 28 05:17:40.251992 systemd-logind[1570]: Removed session 4. Oct 28 05:17:40.326207 sshd[1773]: Accepted publickey for core from 139.178.89.65 port 36866 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:17:40.326973 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:17:40.333835 systemd-logind[1570]: New session 5 of user core. Oct 28 05:17:40.342282 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 28 05:17:40.375195 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 28 05:17:40.375527 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 05:17:40.388518 sudo[1779]: pam_unix(sudo:session): session closed for user root Oct 28 05:17:40.394046 sshd[1778]: Connection closed by 139.178.89.65 port 36866 Oct 28 05:17:40.393094 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Oct 28 05:17:40.405159 systemd[1]: sshd@4-164.92.80.11:22-139.178.89.65:36866.service: Deactivated successfully. Oct 28 05:17:40.409691 systemd[1]: session-5.scope: Deactivated successfully. Oct 28 05:17:40.412548 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit. Oct 28 05:17:40.416082 systemd-logind[1570]: Removed session 5. Oct 28 05:17:40.419406 systemd[1]: Started sshd@5-164.92.80.11:22-139.178.89.65:36872.service - OpenSSH per-connection server daemon (139.178.89.65:36872). Oct 28 05:17:40.497857 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 36872 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:17:40.499046 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:17:40.506788 systemd-logind[1570]: New session 6 of user core. Oct 28 05:17:40.513360 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 28 05:17:40.523921 kubelet[1741]: E1028 05:17:40.523876 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 05:17:40.527069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 05:17:40.527361 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 05:17:40.527759 systemd[1]: kubelet.service: Consumed 1.203s CPU time, 256.2M memory peak. Oct 28 05:17:40.536760 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 28 05:17:40.537418 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 05:17:40.542998 sudo[1792]: pam_unix(sudo:session): session closed for user root Oct 28 05:17:40.552888 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 28 05:17:40.553318 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 05:17:40.566208 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 05:17:40.632539 augenrules[1814]: No rules Oct 28 05:17:40.633447 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 05:17:40.633906 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 05:17:40.635615 sudo[1790]: pam_unix(sudo:session): session closed for user root Oct 28 05:17:40.639107 sshd[1789]: Connection closed by 139.178.89.65 port 36872 Oct 28 05:17:40.640356 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Oct 28 05:17:40.654981 systemd[1]: sshd@5-164.92.80.11:22-139.178.89.65:36872.service: Deactivated successfully. Oct 28 05:17:40.658018 systemd[1]: session-6.scope: Deactivated successfully. Oct 28 05:17:40.659132 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. 
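The kubelet exit above is the usual pre-bootstrap state: the unit starts before /var/lib/kubelet/config.yaml exists, fails, and is restarted later (as seen further down) until something like kubeadm writes the file. For illustration only, with hypothetical values, a configuration of the kind that eventually lands there might begin like this:

    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches the CRI runtime's systemd cgroup driver seen later in this log
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    EOF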
Oct 28 05:17:40.663744 systemd[1]: Started sshd@6-164.92.80.11:22-139.178.89.65:36888.service - OpenSSH per-connection server daemon (139.178.89.65:36888). Oct 28 05:17:40.665849 systemd-logind[1570]: Removed session 6. Oct 28 05:17:40.735175 sshd[1823]: Accepted publickey for core from 139.178.89.65 port 36888 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:17:40.737071 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:17:40.742913 systemd-logind[1570]: New session 7 of user core. Oct 28 05:17:40.753302 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 28 05:17:40.771188 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 28 05:17:40.771630 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 05:17:41.375672 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 28 05:17:41.394540 (dockerd)[1845]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 28 05:17:41.773618 dockerd[1845]: time="2025-10-28T05:17:41.773474463Z" level=info msg="Starting up" Oct 28 05:17:41.778522 dockerd[1845]: time="2025-10-28T05:17:41.778459880Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 28 05:17:41.801779 dockerd[1845]: time="2025-10-28T05:17:41.801656708Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 28 05:17:41.889458 systemd[1]: var-lib-docker-metacopy\x2dcheck924701419-merged.mount: Deactivated successfully. Oct 28 05:17:41.905482 dockerd[1845]: time="2025-10-28T05:17:41.905251781Z" level=info msg="Loading containers: start." Oct 28 05:17:41.920158 kernel: Initializing XFRM netlink socket Oct 28 05:17:42.166109 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Oct 28 05:17:42.167471 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Oct 28 05:17:42.179913 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Oct 28 05:17:42.216840 systemd-networkd[1491]: docker0: Link UP Oct 28 05:17:42.217640 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Oct 28 05:17:42.220402 dockerd[1845]: time="2025-10-28T05:17:42.220362727Z" level=info msg="Loading containers: done." Oct 28 05:17:42.237281 dockerd[1845]: time="2025-10-28T05:17:42.236857331Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 28 05:17:42.237281 dockerd[1845]: time="2025-10-28T05:17:42.236969553Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 28 05:17:42.237281 dockerd[1845]: time="2025-10-28T05:17:42.237115869Z" level=info msg="Initializing buildkit" Oct 28 05:17:42.240795 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2881853038-merged.mount: Deactivated successfully. 
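Once the Docker daemon started above finishes initializing, its API is served on the default unix socket (the "API listen on /run/docker.sock" entry that follows); a quick sanity check against it, as a sketch, is:

    curl -s --unix-socket /run/docker.sock http://localhost/version   # raw Engine API
    docker version --format '{{.Server.Version}}'                     # should print 28.0.4 per this log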
Oct 28 05:17:42.262380 dockerd[1845]: time="2025-10-28T05:17:42.262327652Z" level=info msg="Completed buildkit initialization" Oct 28 05:17:42.271571 dockerd[1845]: time="2025-10-28T05:17:42.271507100Z" level=info msg="Daemon has completed initialization" Oct 28 05:17:42.271744 dockerd[1845]: time="2025-10-28T05:17:42.271603043Z" level=info msg="API listen on /run/docker.sock" Oct 28 05:17:42.272396 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 28 05:17:42.988650 containerd[1596]: time="2025-10-28T05:17:42.988602623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 28 05:17:43.526424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355039944.mount: Deactivated successfully. Oct 28 05:17:44.760664 containerd[1596]: time="2025-10-28T05:17:44.760585001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:44.762062 containerd[1596]: time="2025-10-28T05:17:44.762010096Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 28 05:17:44.762733 containerd[1596]: time="2025-10-28T05:17:44.762696881Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:44.766066 containerd[1596]: time="2025-10-28T05:17:44.765634130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:44.768001 containerd[1596]: time="2025-10-28T05:17:44.767857176Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.77920689s" Oct 28 05:17:44.768001 containerd[1596]: time="2025-10-28T05:17:44.767905463Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 28 05:17:44.768794 containerd[1596]: time="2025-10-28T05:17:44.768736896Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 28 05:17:46.278275 containerd[1596]: time="2025-10-28T05:17:46.278205925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:46.279477 containerd[1596]: time="2025-10-28T05:17:46.279434699Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 28 05:17:46.280161 containerd[1596]: time="2025-10-28T05:17:46.279563214Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:46.282244 containerd[1596]: time="2025-10-28T05:17:46.282092389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 
05:17:46.283707 containerd[1596]: time="2025-10-28T05:17:46.283664910Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.514897958s" Oct 28 05:17:46.284105 containerd[1596]: time="2025-10-28T05:17:46.283867414Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 28 05:17:46.284720 containerd[1596]: time="2025-10-28T05:17:46.284673875Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 28 05:17:47.473494 containerd[1596]: time="2025-10-28T05:17:47.472453939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:47.473494 containerd[1596]: time="2025-10-28T05:17:47.473178276Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 28 05:17:47.474158 containerd[1596]: time="2025-10-28T05:17:47.473768689Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:47.478001 containerd[1596]: time="2025-10-28T05:17:47.477940727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:47.479792 containerd[1596]: time="2025-10-28T05:17:47.479734844Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.195014388s" Oct 28 05:17:47.480006 containerd[1596]: time="2025-10-28T05:17:47.479983294Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 28 05:17:47.480701 containerd[1596]: time="2025-10-28T05:17:47.480644173Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 28 05:17:48.630153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398738684.mount: Deactivated successfully. 
Oct 28 05:17:49.030006 containerd[1596]: time="2025-10-28T05:17:49.029846605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:49.031265 containerd[1596]: time="2025-10-28T05:17:49.030661028Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 28 05:17:49.032369 containerd[1596]: time="2025-10-28T05:17:49.031837096Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:49.033827 containerd[1596]: time="2025-10-28T05:17:49.033783379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:49.034338 containerd[1596]: time="2025-10-28T05:17:49.034307425Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.553167145s" Oct 28 05:17:49.034338 containerd[1596]: time="2025-10-28T05:17:49.034338270Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 28 05:17:49.035141 containerd[1596]: time="2025-10-28T05:17:49.035089632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 28 05:17:49.136140 systemd-resolved[1285]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Oct 28 05:17:49.539096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131390825.mount: Deactivated successfully. 
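The PullImage/ImageCreate entries above come from containerd's CRI plugin fetching the control-plane images. The same state can be inspected, or a pull reproduced by hand, through the sockets this log already shows (a sketch, assuming crictl is installed):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    ctr -n k8s.io images pull registry.k8s.io/kube-proxy:v1.34.1   # same namespace the CRI plugin registered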
Oct 28 05:17:50.606063 containerd[1596]: time="2025-10-28T05:17:50.604797631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:50.606063 containerd[1596]: time="2025-10-28T05:17:50.605815704Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 28 05:17:50.606625 containerd[1596]: time="2025-10-28T05:17:50.606246775Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:50.610045 containerd[1596]: time="2025-10-28T05:17:50.609950444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:50.611337 containerd[1596]: time="2025-10-28T05:17:50.611183751Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.575919381s" Oct 28 05:17:50.611337 containerd[1596]: time="2025-10-28T05:17:50.611227788Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 28 05:17:50.612011 containerd[1596]: time="2025-10-28T05:17:50.611982728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 28 05:17:50.777803 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 28 05:17:50.780004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 05:17:50.947560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 05:17:50.961514 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 05:17:51.027532 kubelet[2201]: E1028 05:17:51.027429 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 05:17:51.031510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 05:17:51.031916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 05:17:51.032790 systemd[1]: kubelet.service: Consumed 214ms CPU time, 110.4M memory peak. Oct 28 05:17:51.080113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount745845436.mount: Deactivated successfully. 
Oct 28 05:17:51.084668 containerd[1596]: time="2025-10-28T05:17:51.084604515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:51.086157 containerd[1596]: time="2025-10-28T05:17:51.086107648Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 28 05:17:51.086601 containerd[1596]: time="2025-10-28T05:17:51.086565390Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:51.089905 containerd[1596]: time="2025-10-28T05:17:51.089832222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:51.090793 containerd[1596]: time="2025-10-28T05:17:51.090437985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 478.419055ms" Oct 28 05:17:51.090793 containerd[1596]: time="2025-10-28T05:17:51.090482638Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 28 05:17:51.091418 containerd[1596]: time="2025-10-28T05:17:51.091379153Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 28 05:17:52.200277 systemd-resolved[1285]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Oct 28 05:17:54.109198 containerd[1596]: time="2025-10-28T05:17:54.109083419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:54.110694 containerd[1596]: time="2025-10-28T05:17:54.110640554Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 28 05:17:54.111236 containerd[1596]: time="2025-10-28T05:17:54.111199868Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:54.118051 containerd[1596]: time="2025-10-28T05:17:54.117898388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:17:54.120821 containerd[1596]: time="2025-10-28T05:17:54.120283593Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.028869183s" Oct 28 05:17:54.120821 containerd[1596]: time="2025-10-28T05:17:54.120337008Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 28 05:17:58.854088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
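The systemd-resolved notes above ("Using degraded feature set UDP instead of UDP+EDNS0" for the upstream resolvers) mean resolved fell back to plain UDP after EDNS0 queries to those servers did not get usable replies. Whether an upstream honours EDNS0 can be probed directly; as a rough sketch:

    dig +edns=0 registry.k8s.io @67.207.67.3 +short   # EDNS0 query
    dig +noedns registry.k8s.io @67.207.67.3 +short   # plain query for comparison
    resolvectl statistics                             # resolved's own cache/transaction counters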
Oct 28 05:17:58.854361 systemd[1]: kubelet.service: Consumed 214ms CPU time, 110.4M memory peak. Oct 28 05:17:58.858005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 05:17:58.904965 systemd[1]: Reload requested from client PID 2281 ('systemctl') (unit session-7.scope)... Oct 28 05:17:58.904992 systemd[1]: Reloading... Oct 28 05:17:59.087083 zram_generator::config[2325]: No configuration found. Oct 28 05:17:59.372752 systemd[1]: Reloading finished in 466 ms. Oct 28 05:17:59.422756 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 28 05:17:59.422940 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 28 05:17:59.423427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 05:17:59.423661 systemd[1]: kubelet.service: Consumed 157ms CPU time, 97.9M memory peak. Oct 28 05:17:59.426494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 05:17:59.626826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 05:17:59.640971 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 05:17:59.717779 kubelet[2377]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 05:17:59.717779 kubelet[2377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 05:17:59.720564 kubelet[2377]: I1028 05:17:59.720440 2377 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 05:18:00.010092 kubelet[2377]: I1028 05:18:00.009629 2377 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 28 05:18:00.010092 kubelet[2377]: I1028 05:18:00.009690 2377 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 05:18:00.014066 kubelet[2377]: I1028 05:18:00.012103 2377 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 28 05:18:00.014066 kubelet[2377]: I1028 05:18:00.012153 2377 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 28 05:18:00.014066 kubelet[2377]: I1028 05:18:00.012539 2377 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 05:18:00.028761 kubelet[2377]: E1028 05:18:00.028717 2377 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://164.92.80.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.92.80.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 28 05:18:00.028956 kubelet[2377]: I1028 05:18:00.028793 2377 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 05:18:00.041309 kubelet[2377]: I1028 05:18:00.041281 2377 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 05:18:00.049466 kubelet[2377]: I1028 05:18:00.049417 2377 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 28 05:18:00.050870 kubelet[2377]: I1028 05:18:00.050746 2377 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 05:18:00.052626 kubelet[2377]: I1028 05:18:00.050863 2377 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4501.0.0-n-a8513f8a3e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 05:18:00.052626 kubelet[2377]: I1028 05:18:00.052615 2377 topology_manager.go:138] "Creating topology manager with none policy" Oct 28 05:18:00.052626 kubelet[2377]: I1028 05:18:00.052630 2377 container_manager_linux.go:306] "Creating device plugin manager" Oct 28 05:18:00.052932 kubelet[2377]: I1028 05:18:00.052753 2377 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 28 05:18:00.055372 kubelet[2377]: I1028 05:18:00.055340 2377 state_mem.go:36] "Initialized new in-memory state store" Oct 28 05:18:00.056205 kubelet[2377]: I1028 05:18:00.056180 2377 kubelet.go:475] "Attempting to sync node with API server" Oct 28 05:18:00.056301 kubelet[2377]: I1028 05:18:00.056216 2377 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 05:18:00.057252 kubelet[2377]: I1028 05:18:00.056780 2377 kubelet.go:387] "Adding apiserver pod source" Oct 28 05:18:00.057252 kubelet[2377]: I1028 05:18:00.056820 2377 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 05:18:00.058072 kubelet[2377]: E1028 05:18:00.058011 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.92.80.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4501.0.0-n-a8513f8a3e&limit=500&resourceVersion=0\": dial tcp 164.92.80.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 28 05:18:00.060817 kubelet[2377]: I1028 05:18:00.060723 2377 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 28 05:18:00.065044 kubelet[2377]: I1028 
05:18:00.064545 2377 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 05:18:00.065044 kubelet[2377]: I1028 05:18:00.064588 2377 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 28 05:18:00.065044 kubelet[2377]: W1028 05:18:00.064650 2377 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 28 05:18:00.069934 kubelet[2377]: I1028 05:18:00.069900 2377 server.go:1262] "Started kubelet" Oct 28 05:18:00.070191 kubelet[2377]: E1028 05:18:00.070167 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.92.80.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.80.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 28 05:18:00.071474 kubelet[2377]: I1028 05:18:00.071427 2377 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 05:18:00.073013 kubelet[2377]: I1028 05:18:00.072981 2377 server.go:310] "Adding debug handlers to kubelet server" Oct 28 05:18:00.075526 kubelet[2377]: I1028 05:18:00.075464 2377 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 05:18:00.075683 kubelet[2377]: I1028 05:18:00.075532 2377 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 28 05:18:00.077058 kubelet[2377]: I1028 05:18:00.075931 2377 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 05:18:00.077992 kubelet[2377]: E1028 05:18:00.076820 2377 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.80.11:6443/api/v1/namespaces/default/events\": dial tcp 164.92.80.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4501.0.0-n-a8513f8a3e.18728ff375b4124e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4501.0.0-n-a8513f8a3e,UID:ci-4501.0.0-n-a8513f8a3e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4501.0.0-n-a8513f8a3e,},FirstTimestamp:2025-10-28 05:18:00.069845582 +0000 UTC m=+0.422674493,LastTimestamp:2025-10-28 05:18:00.069845582 +0000 UTC m=+0.422674493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4501.0.0-n-a8513f8a3e,}" Oct 28 05:18:00.085318 kubelet[2377]: I1028 05:18:00.085133 2377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 05:18:00.085455 kubelet[2377]: I1028 05:18:00.085376 2377 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 05:18:00.088251 kubelet[2377]: E1028 05:18:00.088213 2377 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4501.0.0-n-a8513f8a3e\" not found" Oct 28 05:18:00.088382 kubelet[2377]: I1028 05:18:00.088273 2377 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 28 05:18:00.088581 kubelet[2377]: I1028 05:18:00.088468 2377 desired_state_of_world_populator.go:146] "Desired 
state populator starts to run" Oct 28 05:18:00.088581 kubelet[2377]: I1028 05:18:00.088531 2377 reconciler.go:29] "Reconciler: start to sync state" Oct 28 05:18:00.089005 kubelet[2377]: E1028 05:18:00.088975 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.92.80.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.80.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 28 05:18:00.089331 kubelet[2377]: E1028 05:18:00.089097 2377 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 05:18:00.090176 kubelet[2377]: I1028 05:18:00.090076 2377 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 05:18:00.091239 kubelet[2377]: I1028 05:18:00.091220 2377 factory.go:223] Registration of the containerd container factory successfully Oct 28 05:18:00.092063 kubelet[2377]: E1028 05:18:00.091476 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.80.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4501.0.0-n-a8513f8a3e?timeout=10s\": dial tcp 164.92.80.11:6443: connect: connection refused" interval="200ms" Oct 28 05:18:00.092063 kubelet[2377]: I1028 05:18:00.091532 2377 factory.go:223] Registration of the systemd container factory successfully Oct 28 05:18:00.112714 kubelet[2377]: I1028 05:18:00.112677 2377 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 05:18:00.112714 kubelet[2377]: I1028 05:18:00.112697 2377 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 05:18:00.112714 kubelet[2377]: I1028 05:18:00.112716 2377 state_mem.go:36] "Initialized new in-memory state store" Oct 28 05:18:00.114007 kubelet[2377]: I1028 05:18:00.113971 2377 policy_none.go:49] "None policy: Start" Oct 28 05:18:00.114007 kubelet[2377]: I1028 05:18:00.114010 2377 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 28 05:18:00.114007 kubelet[2377]: I1028 05:18:00.114043 2377 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 28 05:18:00.116136 kubelet[2377]: I1028 05:18:00.116100 2377 policy_none.go:47] "Start" Oct 28 05:18:00.121903 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 28 05:18:00.132385 kubelet[2377]: I1028 05:18:00.132247 2377 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 28 05:18:00.137492 kubelet[2377]: I1028 05:18:00.137431 2377 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 28 05:18:00.137492 kubelet[2377]: I1028 05:18:00.137464 2377 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 28 05:18:00.138015 kubelet[2377]: I1028 05:18:00.137713 2377 kubelet.go:2427] "Starting kubelet main sync loop" Oct 28 05:18:00.138015 kubelet[2377]: E1028 05:18:00.137770 2377 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 05:18:00.140948 kubelet[2377]: E1028 05:18:00.140901 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.92.80.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.80.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 28 05:18:00.153340 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 28 05:18:00.170202 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 28 05:18:00.172519 kubelet[2377]: E1028 05:18:00.172483 2377 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 05:18:00.173239 kubelet[2377]: I1028 05:18:00.173222 2377 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 05:18:00.173387 kubelet[2377]: I1028 05:18:00.173356 2377 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 05:18:00.174830 kubelet[2377]: I1028 05:18:00.174811 2377 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 05:18:00.175902 kubelet[2377]: E1028 05:18:00.175873 2377 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 28 05:18:00.176042 kubelet[2377]: E1028 05:18:00.175931 2377 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4501.0.0-n-a8513f8a3e\" not found" Oct 28 05:18:00.257354 systemd[1]: Created slice kubepods-burstable-pod2c2ed85d93a256d495885089c0bc822a.slice - libcontainer container kubepods-burstable-pod2c2ed85d93a256d495885089c0bc822a.slice. Oct 28 05:18:00.278086 kubelet[2377]: I1028 05:18:00.276389 2377 kubelet_node_status.go:75] "Attempting to register node" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.279949 kubelet[2377]: E1028 05:18:00.278868 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4501.0.0-n-a8513f8a3e\" not found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.280767 kubelet[2377]: E1028 05:18:00.280650 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.80.11:6443/api/v1/nodes\": dial tcp 164.92.80.11:6443: connect: connection refused" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.287591 systemd[1]: Created slice kubepods-burstable-podd0c998a6b2ef8a95d3ab91853abd29ad.slice - libcontainer container kubepods-burstable-podd0c998a6b2ef8a95d3ab91853abd29ad.slice. 
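Just above, systemd creates one slice per QoS tier (kubepods-besteffort.slice, kubepods-burstable.slice) and one leaf slice per static pod, named after the pod UID. A small Python sketch of that naming pattern as it appears in the log (illustrative, not kubelet source; the dash-to-underscore handling for UIDs that contain dashes is an assumption based on systemd unit naming and does not arise for the hash-style UIDs above):

def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    # Leaf slice under kubepods.slice -> kubepods-<qos>.slice, matching the units started above.
    # Assumption: dashes inside a UID would be rewritten to underscores, since systemd
    # treats '-' in slice names as hierarchy separators.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("burstable", "2c2ed85d93a256d495885089c0bc822a"))
# -> kubepods-burstable-pod2c2ed85d93a256d495885089c0bc822a.slice, as created above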
Oct 28 05:18:00.289151 kubelet[2377]: I1028 05:18:00.289059 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/080ab3ed1683e7c01580637d7bb8fbda-kubeconfig\") pod \"kube-scheduler-ci-4501.0.0-n-a8513f8a3e\" (UID: \"080ab3ed1683e7c01580637d7bb8fbda\") " pod="kube-system/kube-scheduler-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.290432 kubelet[2377]: I1028 05:18:00.290400 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c2ed85d93a256d495885089c0bc822a-ca-certs\") pod \"kube-apiserver-ci-4501.0.0-n-a8513f8a3e\" (UID: \"2c2ed85d93a256d495885089c0bc822a\") " pod="kube-system/kube-apiserver-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.290671 kubelet[2377]: I1028 05:18:00.290641 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c2ed85d93a256d495885089c0bc822a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4501.0.0-n-a8513f8a3e\" (UID: \"2c2ed85d93a256d495885089c0bc822a\") " pod="kube-system/kube-apiserver-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.290795 kubelet[2377]: I1028 05:18:00.290777 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0c998a6b2ef8a95d3ab91853abd29ad-k8s-certs\") pod \"kube-controller-manager-ci-4501.0.0-n-a8513f8a3e\" (UID: \"d0c998a6b2ef8a95d3ab91853abd29ad\") " pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.290934 kubelet[2377]: I1028 05:18:00.290912 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0c998a6b2ef8a95d3ab91853abd29ad-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4501.0.0-n-a8513f8a3e\" (UID: \"d0c998a6b2ef8a95d3ab91853abd29ad\") " pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.291271 kubelet[2377]: I1028 05:18:00.291019 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c2ed85d93a256d495885089c0bc822a-k8s-certs\") pod \"kube-apiserver-ci-4501.0.0-n-a8513f8a3e\" (UID: \"2c2ed85d93a256d495885089c0bc822a\") " pod="kube-system/kube-apiserver-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.291271 kubelet[2377]: I1028 05:18:00.291142 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0c998a6b2ef8a95d3ab91853abd29ad-ca-certs\") pod \"kube-controller-manager-ci-4501.0.0-n-a8513f8a3e\" (UID: \"d0c998a6b2ef8a95d3ab91853abd29ad\") " pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.291271 kubelet[2377]: I1028 05:18:00.291178 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0c998a6b2ef8a95d3ab91853abd29ad-flexvolume-dir\") pod \"kube-controller-manager-ci-4501.0.0-n-a8513f8a3e\" (UID: \"d0c998a6b2ef8a95d3ab91853abd29ad\") " pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.291271 kubelet[2377]: I1028 05:18:00.291207 2377 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0c998a6b2ef8a95d3ab91853abd29ad-kubeconfig\") pod \"kube-controller-manager-ci-4501.0.0-n-a8513f8a3e\" (UID: \"d0c998a6b2ef8a95d3ab91853abd29ad\") " pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.293061 kubelet[2377]: E1028 05:18:00.292988 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.80.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4501.0.0-n-a8513f8a3e?timeout=10s\": dial tcp 164.92.80.11:6443: connect: connection refused" interval="400ms" Oct 28 05:18:00.295016 kubelet[2377]: E1028 05:18:00.294744 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4501.0.0-n-a8513f8a3e\" not found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.298496 systemd[1]: Created slice kubepods-burstable-pod080ab3ed1683e7c01580637d7bb8fbda.slice - libcontainer container kubepods-burstable-pod080ab3ed1683e7c01580637d7bb8fbda.slice. Oct 28 05:18:00.300989 kubelet[2377]: E1028 05:18:00.300727 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4501.0.0-n-a8513f8a3e\" not found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.482602 kubelet[2377]: I1028 05:18:00.482522 2377 kubelet_node_status.go:75] "Attempting to register node" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.483134 kubelet[2377]: E1028 05:18:00.483080 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.80.11:6443/api/v1/nodes\": dial tcp 164.92.80.11:6443: connect: connection refused" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.581783 kubelet[2377]: E1028 05:18:00.581620 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:00.583349 containerd[1596]: time="2025-10-28T05:18:00.583284477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4501.0.0-n-a8513f8a3e,Uid:2c2ed85d93a256d495885089c0bc822a,Namespace:kube-system,Attempt:0,}" Oct 28 05:18:00.607398 kubelet[2377]: E1028 05:18:00.606846 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:00.609163 containerd[1596]: time="2025-10-28T05:18:00.608157952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4501.0.0-n-a8513f8a3e,Uid:d0c998a6b2ef8a95d3ab91853abd29ad,Namespace:kube-system,Attempt:0,}" Oct 28 05:18:00.609370 kubelet[2377]: E1028 05:18:00.608733 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:00.610315 containerd[1596]: time="2025-10-28T05:18:00.610264698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4501.0.0-n-a8513f8a3e,Uid:080ab3ed1683e7c01580637d7bb8fbda,Namespace:kube-system,Attempt:0,}" Oct 28 05:18:00.613078 systemd-resolved[1285]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Oct 28 05:18:00.694131 kubelet[2377]: E1028 05:18:00.694014 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.80.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4501.0.0-n-a8513f8a3e?timeout=10s\": dial tcp 164.92.80.11:6443: connect: connection refused" interval="800ms" Oct 28 05:18:00.863297 kubelet[2377]: E1028 05:18:00.863111 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.92.80.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4501.0.0-n-a8513f8a3e&limit=500&resourceVersion=0\": dial tcp 164.92.80.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 28 05:18:00.885611 kubelet[2377]: I1028 05:18:00.885383 2377 kubelet_node_status.go:75] "Attempting to register node" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:00.886618 kubelet[2377]: E1028 05:18:00.886546 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.80.11:6443/api/v1/nodes\": dial tcp 164.92.80.11:6443: connect: connection refused" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:01.098082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2396813169.mount: Deactivated successfully. Oct 28 05:18:01.105465 containerd[1596]: time="2025-10-28T05:18:01.104444390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 05:18:01.106267 containerd[1596]: time="2025-10-28T05:18:01.106223564Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 05:18:01.108282 containerd[1596]: time="2025-10-28T05:18:01.108240939Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 28 05:18:01.108795 containerd[1596]: time="2025-10-28T05:18:01.108765673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 05:18:01.110322 containerd[1596]: time="2025-10-28T05:18:01.110267662Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 05:18:01.112166 containerd[1596]: time="2025-10-28T05:18:01.112129440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 05:18:01.112479 containerd[1596]: time="2025-10-28T05:18:01.112447378Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 05:18:01.116112 containerd[1596]: time="2025-10-28T05:18:01.115900314Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 498.569627ms" Oct 28 05:18:01.118019 containerd[1596]: time="2025-10-28T05:18:01.117889682Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 05:18:01.119504 containerd[1596]: time="2025-10-28T05:18:01.119440939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 506.013596ms" Oct 28 05:18:01.126595 kubelet[2377]: E1028 05:18:01.125696 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.92.80.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.80.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 28 05:18:01.133772 containerd[1596]: time="2025-10-28T05:18:01.133654557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 523.812635ms" Oct 28 05:18:01.271546 containerd[1596]: time="2025-10-28T05:18:01.271461059Z" level=info msg="connecting to shim fd27734055bbd16dca825683f8f9c7d5a8503e511458dfe857d9cbfafc14ce13" address="unix:///run/containerd/s/a5a46f343ea871e9c30501433e71642dcbaf442d1a777902d1e5270745507b42" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:18:01.278842 containerd[1596]: time="2025-10-28T05:18:01.277337095Z" level=info msg="connecting to shim c5e2ce2790c2f356edb330580e6a72d777c2894e543dafa7c710ed9904879074" address="unix:///run/containerd/s/c66bb1447a5063a1fff5dc3979970c71b68fd777dfe9da42df82f5489c8cf5c2" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:18:01.280760 containerd[1596]: time="2025-10-28T05:18:01.280694397Z" level=info msg="connecting to shim 70f138c77baf244b4250c45e76afb34d63a42163a38b73cf99a8e041fbb75257" address="unix:///run/containerd/s/d4ca1810b71d0d06ebf1a1d77f3fddf1fb97c21ee7ba5a7857f3b64c6d73c412" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:18:01.422700 systemd[1]: Started cri-containerd-70f138c77baf244b4250c45e76afb34d63a42163a38b73cf99a8e041fbb75257.scope - libcontainer container 70f138c77baf244b4250c45e76afb34d63a42163a38b73cf99a8e041fbb75257. Oct 28 05:18:01.439451 systemd[1]: Started cri-containerd-c5e2ce2790c2f356edb330580e6a72d777c2894e543dafa7c710ed9904879074.scope - libcontainer container c5e2ce2790c2f356edb330580e6a72d777c2894e543dafa7c710ed9904879074. Oct 28 05:18:01.443268 systemd[1]: Started cri-containerd-fd27734055bbd16dca825683f8f9c7d5a8503e511458dfe857d9cbfafc14ce13.scope - libcontainer container fd27734055bbd16dca825683f8f9c7d5a8503e511458dfe857d9cbfafc14ce13. 
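Until the API server answers on 164.92.80.11:6443, the kubelet keeps retrying the node lease, and the "Failed to ensure lease exists, will retry" interval doubles on each attempt: 200ms, 400ms and 800ms in the entries above, with 1.6s following a few entries below. A short Python sketch of that doubling, inferred from the logged values rather than kubelet's source (where the interval eventually tops out is not visible in this log):

interval = 0.2                 # first "will retry" interval reported above, in seconds
observed = []
for _ in range(4):             # 200 ms, 400 ms, 800 ms, 1.6 s are the values the log shows
    observed.append(interval)
    interval *= 2              # doubling inferred from the logged sequence
print([f"{v:g}s" for v in observed])   # ['0.2s', '0.4s', '0.8s', '1.6s']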
Oct 28 05:18:01.496456 kubelet[2377]: E1028 05:18:01.496244 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.80.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4501.0.0-n-a8513f8a3e?timeout=10s\": dial tcp 164.92.80.11:6443: connect: connection refused" interval="1.6s" Oct 28 05:18:01.511826 kubelet[2377]: E1028 05:18:01.511750 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.92.80.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.80.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 28 05:18:01.521784 kubelet[2377]: E1028 05:18:01.521731 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.92.80.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.80.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 28 05:18:01.593525 containerd[1596]: time="2025-10-28T05:18:01.593063855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4501.0.0-n-a8513f8a3e,Uid:080ab3ed1683e7c01580637d7bb8fbda,Namespace:kube-system,Attempt:0,} returns sandbox id \"70f138c77baf244b4250c45e76afb34d63a42163a38b73cf99a8e041fbb75257\"" Oct 28 05:18:01.593525 containerd[1596]: time="2025-10-28T05:18:01.593429376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4501.0.0-n-a8513f8a3e,Uid:2c2ed85d93a256d495885089c0bc822a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5e2ce2790c2f356edb330580e6a72d777c2894e543dafa7c710ed9904879074\"" Oct 28 05:18:01.596615 kubelet[2377]: E1028 05:18:01.596458 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:01.596615 kubelet[2377]: E1028 05:18:01.596478 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:01.599099 containerd[1596]: time="2025-10-28T05:18:01.599033412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4501.0.0-n-a8513f8a3e,Uid:d0c998a6b2ef8a95d3ab91853abd29ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd27734055bbd16dca825683f8f9c7d5a8503e511458dfe857d9cbfafc14ce13\"" Oct 28 05:18:01.604227 kubelet[2377]: E1028 05:18:01.604130 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:01.608334 containerd[1596]: time="2025-10-28T05:18:01.608275892Z" level=info msg="CreateContainer within sandbox \"70f138c77baf244b4250c45e76afb34d63a42163a38b73cf99a8e041fbb75257\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 28 05:18:01.610702 containerd[1596]: time="2025-10-28T05:18:01.610547892Z" level=info msg="CreateContainer within sandbox \"c5e2ce2790c2f356edb330580e6a72d777c2894e543dafa7c710ed9904879074\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 28 05:18:01.612440 containerd[1596]: time="2025-10-28T05:18:01.612387703Z" level=info msg="CreateContainer within sandbox 
\"fd27734055bbd16dca825683f8f9c7d5a8503e511458dfe857d9cbfafc14ce13\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 28 05:18:01.624835 containerd[1596]: time="2025-10-28T05:18:01.624764172Z" level=info msg="Container cac66329bd0e1a0db1f5a81429b6af50403d6911c3a944cadf10f0b58ad65df3: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:18:01.627579 containerd[1596]: time="2025-10-28T05:18:01.627498222Z" level=info msg="Container 927c253a6c9f43c8dd69f988220afa6253eb28e56296fd72c9bfab1950898045: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:18:01.640503 containerd[1596]: time="2025-10-28T05:18:01.640433748Z" level=info msg="Container bb99177e9e87f7227bea118e8789dce0b3a67f025b9f4e174ebd1642be20121d: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:18:01.651924 containerd[1596]: time="2025-10-28T05:18:01.651823943Z" level=info msg="CreateContainer within sandbox \"70f138c77baf244b4250c45e76afb34d63a42163a38b73cf99a8e041fbb75257\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cac66329bd0e1a0db1f5a81429b6af50403d6911c3a944cadf10f0b58ad65df3\"" Oct 28 05:18:01.653157 containerd[1596]: time="2025-10-28T05:18:01.653098124Z" level=info msg="StartContainer for \"cac66329bd0e1a0db1f5a81429b6af50403d6911c3a944cadf10f0b58ad65df3\"" Oct 28 05:18:01.655237 containerd[1596]: time="2025-10-28T05:18:01.655166017Z" level=info msg="connecting to shim cac66329bd0e1a0db1f5a81429b6af50403d6911c3a944cadf10f0b58ad65df3" address="unix:///run/containerd/s/d4ca1810b71d0d06ebf1a1d77f3fddf1fb97c21ee7ba5a7857f3b64c6d73c412" protocol=ttrpc version=3 Oct 28 05:18:01.658724 containerd[1596]: time="2025-10-28T05:18:01.658655119Z" level=info msg="CreateContainer within sandbox \"fd27734055bbd16dca825683f8f9c7d5a8503e511458dfe857d9cbfafc14ce13\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"927c253a6c9f43c8dd69f988220afa6253eb28e56296fd72c9bfab1950898045\"" Oct 28 05:18:01.661111 containerd[1596]: time="2025-10-28T05:18:01.659905366Z" level=info msg="StartContainer for \"927c253a6c9f43c8dd69f988220afa6253eb28e56296fd72c9bfab1950898045\"" Oct 28 05:18:01.661111 containerd[1596]: time="2025-10-28T05:18:01.659946658Z" level=info msg="CreateContainer within sandbox \"c5e2ce2790c2f356edb330580e6a72d777c2894e543dafa7c710ed9904879074\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb99177e9e87f7227bea118e8789dce0b3a67f025b9f4e174ebd1642be20121d\"" Oct 28 05:18:01.661723 containerd[1596]: time="2025-10-28T05:18:01.661657836Z" level=info msg="connecting to shim 927c253a6c9f43c8dd69f988220afa6253eb28e56296fd72c9bfab1950898045" address="unix:///run/containerd/s/a5a46f343ea871e9c30501433e71642dcbaf442d1a777902d1e5270745507b42" protocol=ttrpc version=3 Oct 28 05:18:01.662255 containerd[1596]: time="2025-10-28T05:18:01.662223443Z" level=info msg="StartContainer for \"bb99177e9e87f7227bea118e8789dce0b3a67f025b9f4e174ebd1642be20121d\"" Oct 28 05:18:01.663753 containerd[1596]: time="2025-10-28T05:18:01.663639933Z" level=info msg="connecting to shim bb99177e9e87f7227bea118e8789dce0b3a67f025b9f4e174ebd1642be20121d" address="unix:///run/containerd/s/c66bb1447a5063a1fff5dc3979970c71b68fd777dfe9da42df82f5489c8cf5c2" protocol=ttrpc version=3 Oct 28 05:18:01.693681 kubelet[2377]: I1028 05:18:01.693384 2377 kubelet_node_status.go:75] "Attempting to register node" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:01.694893 kubelet[2377]: E1028 05:18:01.694746 2377 kubelet_node_status.go:107] "Unable to register 
node with API server" err="Post \"https://164.92.80.11:6443/api/v1/nodes\": dial tcp 164.92.80.11:6443: connect: connection refused" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:01.704447 systemd[1]: Started cri-containerd-cac66329bd0e1a0db1f5a81429b6af50403d6911c3a944cadf10f0b58ad65df3.scope - libcontainer container cac66329bd0e1a0db1f5a81429b6af50403d6911c3a944cadf10f0b58ad65df3. Oct 28 05:18:01.720097 systemd[1]: Started cri-containerd-927c253a6c9f43c8dd69f988220afa6253eb28e56296fd72c9bfab1950898045.scope - libcontainer container 927c253a6c9f43c8dd69f988220afa6253eb28e56296fd72c9bfab1950898045. Oct 28 05:18:01.733409 systemd[1]: Started cri-containerd-bb99177e9e87f7227bea118e8789dce0b3a67f025b9f4e174ebd1642be20121d.scope - libcontainer container bb99177e9e87f7227bea118e8789dce0b3a67f025b9f4e174ebd1642be20121d. Oct 28 05:18:01.862494 containerd[1596]: time="2025-10-28T05:18:01.862357327Z" level=info msg="StartContainer for \"bb99177e9e87f7227bea118e8789dce0b3a67f025b9f4e174ebd1642be20121d\" returns successfully" Oct 28 05:18:01.868700 containerd[1596]: time="2025-10-28T05:18:01.868651349Z" level=info msg="StartContainer for \"cac66329bd0e1a0db1f5a81429b6af50403d6911c3a944cadf10f0b58ad65df3\" returns successfully" Oct 28 05:18:01.883132 containerd[1596]: time="2025-10-28T05:18:01.882962750Z" level=info msg="StartContainer for \"927c253a6c9f43c8dd69f988220afa6253eb28e56296fd72c9bfab1950898045\" returns successfully" Oct 28 05:18:02.072486 kubelet[2377]: E1028 05:18:02.072354 2377 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://164.92.80.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.92.80.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 28 05:18:02.165755 kubelet[2377]: E1028 05:18:02.165694 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4501.0.0-n-a8513f8a3e\" not found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:02.165929 kubelet[2377]: E1028 05:18:02.165884 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:02.173011 kubelet[2377]: E1028 05:18:02.172930 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4501.0.0-n-a8513f8a3e\" not found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:02.173310 kubelet[2377]: E1028 05:18:02.173184 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:02.179852 kubelet[2377]: E1028 05:18:02.179779 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4501.0.0-n-a8513f8a3e\" not found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:02.180395 kubelet[2377]: E1028 05:18:02.180005 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:03.183293 kubelet[2377]: E1028 05:18:03.181714 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4501.0.0-n-a8513f8a3e\" not 
found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:03.184181 kubelet[2377]: E1028 05:18:03.183769 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4501.0.0-n-a8513f8a3e\" not found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:03.184181 kubelet[2377]: E1028 05:18:03.184074 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:03.184181 kubelet[2377]: E1028 05:18:03.184109 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:03.296896 kubelet[2377]: I1028 05:18:03.296860 2377 kubelet_node_status.go:75] "Attempting to register node" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:04.724739 kubelet[2377]: E1028 05:18:04.724690 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4501.0.0-n-a8513f8a3e\" not found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:04.725272 kubelet[2377]: E1028 05:18:04.724944 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:05.055805 kubelet[2377]: E1028 05:18:05.055672 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4501.0.0-n-a8513f8a3e\" not found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:05.055966 kubelet[2377]: E1028 05:18:05.055854 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:05.209527 kubelet[2377]: E1028 05:18:05.209466 2377 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4501.0.0-n-a8513f8a3e\" not found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:05.270068 kubelet[2377]: E1028 05:18:05.268883 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4501.0.0-n-a8513f8a3e\" not found" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:05.270478 kubelet[2377]: E1028 05:18:05.270458 2377 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:05.302377 kubelet[2377]: I1028 05:18:05.302333 2377 kubelet_node_status.go:78] "Successfully registered node" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:05.394058 kubelet[2377]: I1028 05:18:05.391864 2377 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:05.408815 kubelet[2377]: E1028 05:18:05.408735 2377 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4501.0.0-n-a8513f8a3e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:05.408815 kubelet[2377]: I1028 05:18:05.408770 2377 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:05.411915 kubelet[2377]: E1028 05:18:05.411856 2377 kubelet.go:3221] 
"Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4501.0.0-n-a8513f8a3e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:05.411915 kubelet[2377]: I1028 05:18:05.411895 2377 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:05.414840 kubelet[2377]: E1028 05:18:05.414776 2377 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4501.0.0-n-a8513f8a3e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:06.078317 kubelet[2377]: I1028 05:18:06.078276 2377 apiserver.go:52] "Watching apiserver" Oct 28 05:18:06.088879 kubelet[2377]: I1028 05:18:06.088829 2377 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 28 05:18:07.635345 systemd[1]: Reload requested from client PID 2667 ('systemctl') (unit session-7.scope)... Oct 28 05:18:07.635374 systemd[1]: Reloading... Oct 28 05:18:07.774078 zram_generator::config[2723]: No configuration found. Oct 28 05:18:08.009470 systemd[1]: Reloading finished in 373 ms. Oct 28 05:18:08.044164 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 05:18:08.055726 systemd[1]: kubelet.service: Deactivated successfully. Oct 28 05:18:08.055959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 05:18:08.056040 systemd[1]: kubelet.service: Consumed 959ms CPU time, 121M memory peak. Oct 28 05:18:08.058875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 05:18:08.244413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 05:18:08.256531 (kubelet)[2761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 05:18:08.347954 kubelet[2761]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 05:18:08.347954 kubelet[2761]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 05:18:08.347954 kubelet[2761]: I1028 05:18:08.347331 2761 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 05:18:08.360110 kubelet[2761]: I1028 05:18:08.360056 2761 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 28 05:18:08.362052 kubelet[2761]: I1028 05:18:08.360308 2761 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 05:18:08.362052 kubelet[2761]: I1028 05:18:08.360356 2761 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 28 05:18:08.362052 kubelet[2761]: I1028 05:18:08.360367 2761 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 28 05:18:08.362052 kubelet[2761]: I1028 05:18:08.360708 2761 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 05:18:08.362852 kubelet[2761]: I1028 05:18:08.362821 2761 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 28 05:18:08.366352 kubelet[2761]: I1028 05:18:08.366294 2761 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 05:18:08.382541 kubelet[2761]: I1028 05:18:08.382478 2761 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 05:18:08.387178 kubelet[2761]: I1028 05:18:08.387138 2761 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 28 05:18:08.388490 kubelet[2761]: I1028 05:18:08.388431 2761 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 05:18:08.388902 kubelet[2761]: I1028 05:18:08.388648 2761 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4501.0.0-n-a8513f8a3e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 05:18:08.389123 kubelet[2761]: I1028 05:18:08.389104 2761 topology_manager.go:138] "Creating topology manager with none policy" Oct 28 05:18:08.389204 kubelet[2761]: I1028 05:18:08.389193 2761 container_manager_linux.go:306] "Creating device plugin manager" Oct 28 05:18:08.389327 kubelet[2761]: I1028 05:18:08.389314 2761 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 28 05:18:08.390572 kubelet[2761]: I1028 05:18:08.390545 2761 state_mem.go:36] "Initialized new in-memory state store" Oct 28 05:18:08.390927 kubelet[2761]: I1028 05:18:08.390908 2761 kubelet.go:475] "Attempting to sync node with API server" Oct 28 05:18:08.391049 kubelet[2761]: I1028 05:18:08.391036 2761 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 05:18:08.391144 kubelet[2761]: I1028 05:18:08.391134 2761 kubelet.go:387] "Adding apiserver 
pod source" Oct 28 05:18:08.391235 kubelet[2761]: I1028 05:18:08.391224 2761 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 05:18:08.394096 kubelet[2761]: I1028 05:18:08.394063 2761 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 28 05:18:08.394978 kubelet[2761]: I1028 05:18:08.394943 2761 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 05:18:08.395096 kubelet[2761]: I1028 05:18:08.395002 2761 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 28 05:18:08.405859 kubelet[2761]: I1028 05:18:08.405812 2761 server.go:1262] "Started kubelet" Oct 28 05:18:08.418019 kubelet[2761]: I1028 05:18:08.417992 2761 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 05:18:08.434143 kubelet[2761]: I1028 05:18:08.434098 2761 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 05:18:08.435158 kubelet[2761]: I1028 05:18:08.435136 2761 server.go:310] "Adding debug handlers to kubelet server" Oct 28 05:18:08.437502 kubelet[2761]: I1028 05:18:08.437465 2761 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 05:18:08.437832 kubelet[2761]: I1028 05:18:08.437815 2761 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 28 05:18:08.438500 kubelet[2761]: I1028 05:18:08.438447 2761 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 05:18:08.439398 kubelet[2761]: I1028 05:18:08.438897 2761 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 05:18:08.441761 kubelet[2761]: I1028 05:18:08.441738 2761 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 28 05:18:08.447105 kubelet[2761]: I1028 05:18:08.447055 2761 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 28 05:18:08.447362 kubelet[2761]: I1028 05:18:08.447351 2761 reconciler.go:29] "Reconciler: start to sync state" Oct 28 05:18:08.450896 kubelet[2761]: I1028 05:18:08.450865 2761 factory.go:223] Registration of the systemd container factory successfully Oct 28 05:18:08.451226 kubelet[2761]: I1028 05:18:08.451206 2761 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 05:18:08.453592 kubelet[2761]: I1028 05:18:08.453471 2761 factory.go:223] Registration of the containerd container factory successfully Oct 28 05:18:08.453918 kubelet[2761]: I1028 05:18:08.450901 2761 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 28 05:18:08.455997 kubelet[2761]: I1028 05:18:08.455895 2761 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 28 05:18:08.455997 kubelet[2761]: I1028 05:18:08.455928 2761 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 28 05:18:08.455997 kubelet[2761]: I1028 05:18:08.455960 2761 kubelet.go:2427] "Starting kubelet main sync loop" Oct 28 05:18:08.456202 kubelet[2761]: E1028 05:18:08.456040 2761 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 05:18:08.467412 kubelet[2761]: E1028 05:18:08.467363 2761 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 05:18:08.526007 kubelet[2761]: I1028 05:18:08.525971 2761 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 05:18:08.528056 kubelet[2761]: I1028 05:18:08.527684 2761 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 05:18:08.528056 kubelet[2761]: I1028 05:18:08.527729 2761 state_mem.go:36] "Initialized new in-memory state store" Oct 28 05:18:08.528056 kubelet[2761]: I1028 05:18:08.527891 2761 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 28 05:18:08.528056 kubelet[2761]: I1028 05:18:08.527905 2761 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 28 05:18:08.528056 kubelet[2761]: I1028 05:18:08.527929 2761 policy_none.go:49] "None policy: Start" Oct 28 05:18:08.528056 kubelet[2761]: I1028 05:18:08.527943 2761 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 28 05:18:08.528056 kubelet[2761]: I1028 05:18:08.527957 2761 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 28 05:18:08.528467 kubelet[2761]: I1028 05:18:08.528447 2761 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 28 05:18:08.528523 kubelet[2761]: I1028 05:18:08.528517 2761 policy_none.go:47] "Start" Oct 28 05:18:08.533731 kubelet[2761]: E1028 05:18:08.533696 2761 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 05:18:08.533966 kubelet[2761]: I1028 05:18:08.533949 2761 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 05:18:08.534003 kubelet[2761]: I1028 05:18:08.533971 2761 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 05:18:08.538683 kubelet[2761]: I1028 05:18:08.538647 2761 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 05:18:08.542612 kubelet[2761]: E1028 05:18:08.541764 2761 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 28 05:18:08.558468 kubelet[2761]: I1028 05:18:08.558425 2761 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.559005 kubelet[2761]: I1028 05:18:08.558982 2761 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.563732 kubelet[2761]: I1028 05:18:08.563599 2761 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.570168 kubelet[2761]: I1028 05:18:08.569983 2761 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 28 05:18:08.570520 kubelet[2761]: I1028 05:18:08.570437 2761 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 28 05:18:08.571838 kubelet[2761]: I1028 05:18:08.571724 2761 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 28 05:18:08.644290 kubelet[2761]: I1028 05:18:08.644246 2761 kubelet_node_status.go:75] "Attempting to register node" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.657813 kubelet[2761]: I1028 05:18:08.657769 2761 kubelet_node_status.go:124] "Node was previously registered" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.657989 kubelet[2761]: I1028 05:18:08.657865 2761 kubelet_node_status.go:78] "Successfully registered node" node="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.749052 kubelet[2761]: I1028 05:18:08.748877 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c2ed85d93a256d495885089c0bc822a-k8s-certs\") pod \"kube-apiserver-ci-4501.0.0-n-a8513f8a3e\" (UID: \"2c2ed85d93a256d495885089c0bc822a\") " pod="kube-system/kube-apiserver-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.749052 kubelet[2761]: I1028 05:18:08.748926 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c2ed85d93a256d495885089c0bc822a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4501.0.0-n-a8513f8a3e\" (UID: \"2c2ed85d93a256d495885089c0bc822a\") " pod="kube-system/kube-apiserver-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.749052 kubelet[2761]: I1028 05:18:08.748950 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0c998a6b2ef8a95d3ab91853abd29ad-kubeconfig\") pod \"kube-controller-manager-ci-4501.0.0-n-a8513f8a3e\" (UID: \"d0c998a6b2ef8a95d3ab91853abd29ad\") " pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.749052 kubelet[2761]: I1028 05:18:08.748967 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0c998a6b2ef8a95d3ab91853abd29ad-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4501.0.0-n-a8513f8a3e\" (UID: \"d0c998a6b2ef8a95d3ab91853abd29ad\") " pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.749052 kubelet[2761]: 
I1028 05:18:08.748984 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0c998a6b2ef8a95d3ab91853abd29ad-ca-certs\") pod \"kube-controller-manager-ci-4501.0.0-n-a8513f8a3e\" (UID: \"d0c998a6b2ef8a95d3ab91853abd29ad\") " pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.749321 kubelet[2761]: I1028 05:18:08.749004 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0c998a6b2ef8a95d3ab91853abd29ad-flexvolume-dir\") pod \"kube-controller-manager-ci-4501.0.0-n-a8513f8a3e\" (UID: \"d0c998a6b2ef8a95d3ab91853abd29ad\") " pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.749587 kubelet[2761]: I1028 05:18:08.749019 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0c998a6b2ef8a95d3ab91853abd29ad-k8s-certs\") pod \"kube-controller-manager-ci-4501.0.0-n-a8513f8a3e\" (UID: \"d0c998a6b2ef8a95d3ab91853abd29ad\") " pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.749587 kubelet[2761]: I1028 05:18:08.749526 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/080ab3ed1683e7c01580637d7bb8fbda-kubeconfig\") pod \"kube-scheduler-ci-4501.0.0-n-a8513f8a3e\" (UID: \"080ab3ed1683e7c01580637d7bb8fbda\") " pod="kube-system/kube-scheduler-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.749587 kubelet[2761]: I1028 05:18:08.749547 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c2ed85d93a256d495885089c0bc822a-ca-certs\") pod \"kube-apiserver-ci-4501.0.0-n-a8513f8a3e\" (UID: \"2c2ed85d93a256d495885089c0bc822a\") " pod="kube-system/kube-apiserver-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:08.873066 kubelet[2761]: E1028 05:18:08.872991 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:08.873899 kubelet[2761]: E1028 05:18:08.873731 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:08.879194 kubelet[2761]: E1028 05:18:08.879158 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:09.397866 kubelet[2761]: I1028 05:18:09.397557 2761 apiserver.go:52] "Watching apiserver" Oct 28 05:18:09.450257 kubelet[2761]: I1028 05:18:09.450040 2761 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 28 05:18:09.501740 kubelet[2761]: I1028 05:18:09.501250 2761 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:09.503851 kubelet[2761]: E1028 05:18:09.502471 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:09.504417 kubelet[2761]: 
E1028 05:18:09.503716 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:09.516828 kubelet[2761]: I1028 05:18:09.516344 2761 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 28 05:18:09.516828 kubelet[2761]: E1028 05:18:09.516426 2761 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4501.0.0-n-a8513f8a3e\" already exists" pod="kube-system/kube-scheduler-ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:09.516828 kubelet[2761]: E1028 05:18:09.516704 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:09.585511 kubelet[2761]: I1028 05:18:09.585404 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4501.0.0-n-a8513f8a3e" podStartSLOduration=1.585383813 podStartE2EDuration="1.585383813s" podCreationTimestamp="2025-10-28 05:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:18:09.565978454 +0000 UTC m=+1.300598309" watchObservedRunningTime="2025-10-28 05:18:09.585383813 +0000 UTC m=+1.320003679" Oct 28 05:18:09.599077 kubelet[2761]: I1028 05:18:09.598916 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4501.0.0-n-a8513f8a3e" podStartSLOduration=1.598897273 podStartE2EDuration="1.598897273s" podCreationTimestamp="2025-10-28 05:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:18:09.586876634 +0000 UTC m=+1.321496474" watchObservedRunningTime="2025-10-28 05:18:09.598897273 +0000 UTC m=+1.333517090" Oct 28 05:18:09.614740 kubelet[2761]: I1028 05:18:09.614419 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4501.0.0-n-a8513f8a3e" podStartSLOduration=1.614393972 podStartE2EDuration="1.614393972s" podCreationTimestamp="2025-10-28 05:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:18:09.60019827 +0000 UTC m=+1.334818108" watchObservedRunningTime="2025-10-28 05:18:09.614393972 +0000 UTC m=+1.349013814" Oct 28 05:18:10.503602 kubelet[2761]: E1028 05:18:10.503175 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:10.504907 kubelet[2761]: E1028 05:18:10.504806 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:11.060216 kubelet[2761]: E1028 05:18:11.059989 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:11.505719 kubelet[2761]: E1028 05:18:11.504939 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:13.367332 systemd-timesyncd[1463]: Contacted time server 45.77.126.122:123 (2.flatcar.pool.ntp.org). Oct 28 05:18:13.367353 systemd-resolved[1285]: Clock change detected. Flushing caches. Oct 28 05:18:13.367421 systemd-timesyncd[1463]: Initial clock synchronization to Tue 2025-10-28 05:18:13.366867 UTC. Oct 28 05:18:14.934968 kubelet[2761]: I1028 05:18:14.934936 2761 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 28 05:18:14.935803 kubelet[2761]: I1028 05:18:14.935413 2761 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 28 05:18:14.935857 containerd[1596]: time="2025-10-28T05:18:14.935238702Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 28 05:18:15.447957 systemd[1]: Created slice kubepods-besteffort-pod59396d10_c9a7_4ed1_9b51_eaf3e2e31ebb.slice - libcontainer container kubepods-besteffort-pod59396d10_c9a7_4ed1_9b51_eaf3e2e31ebb.slice. Oct 28 05:18:15.493309 kubelet[2761]: I1028 05:18:15.493170 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59396d10-c9a7-4ed1-9b51-eaf3e2e31ebb-xtables-lock\") pod \"kube-proxy-jwzh2\" (UID: \"59396d10-c9a7-4ed1-9b51-eaf3e2e31ebb\") " pod="kube-system/kube-proxy-jwzh2" Oct 28 05:18:15.493309 kubelet[2761]: I1028 05:18:15.493233 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59396d10-c9a7-4ed1-9b51-eaf3e2e31ebb-lib-modules\") pod \"kube-proxy-jwzh2\" (UID: \"59396d10-c9a7-4ed1-9b51-eaf3e2e31ebb\") " pod="kube-system/kube-proxy-jwzh2" Oct 28 05:18:15.493309 kubelet[2761]: I1028 05:18:15.493272 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhtz4\" (UniqueName: \"kubernetes.io/projected/59396d10-c9a7-4ed1-9b51-eaf3e2e31ebb-kube-api-access-hhtz4\") pod \"kube-proxy-jwzh2\" (UID: \"59396d10-c9a7-4ed1-9b51-eaf3e2e31ebb\") " pod="kube-system/kube-proxy-jwzh2" Oct 28 05:18:15.493309 kubelet[2761]: I1028 05:18:15.493305 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/59396d10-c9a7-4ed1-9b51-eaf3e2e31ebb-kube-proxy\") pod \"kube-proxy-jwzh2\" (UID: \"59396d10-c9a7-4ed1-9b51-eaf3e2e31ebb\") " pod="kube-system/kube-proxy-jwzh2" Oct 28 05:18:15.758676 kubelet[2761]: E1028 05:18:15.758612 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:15.760455 containerd[1596]: time="2025-10-28T05:18:15.760169926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jwzh2,Uid:59396d10-c9a7-4ed1-9b51-eaf3e2e31ebb,Namespace:kube-system,Attempt:0,}" Oct 28 05:18:15.782192 containerd[1596]: time="2025-10-28T05:18:15.782152461Z" level=info msg="connecting to shim 78be2563aa2906cc67d60335f05303aec81e7dddda4af38fa3063e1037dfa83f" address="unix:///run/containerd/s/0a39ede7c8e279fa43886e7725d6383bb37dac3f583aec032c8c461101595e86" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:18:15.821948 systemd[1]: Started 
cri-containerd-78be2563aa2906cc67d60335f05303aec81e7dddda4af38fa3063e1037dfa83f.scope - libcontainer container 78be2563aa2906cc67d60335f05303aec81e7dddda4af38fa3063e1037dfa83f. Oct 28 05:18:15.873690 containerd[1596]: time="2025-10-28T05:18:15.873606955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jwzh2,Uid:59396d10-c9a7-4ed1-9b51-eaf3e2e31ebb,Namespace:kube-system,Attempt:0,} returns sandbox id \"78be2563aa2906cc67d60335f05303aec81e7dddda4af38fa3063e1037dfa83f\"" Oct 28 05:18:15.875982 kubelet[2761]: E1028 05:18:15.875943 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:15.882563 containerd[1596]: time="2025-10-28T05:18:15.882511523Z" level=info msg="CreateContainer within sandbox \"78be2563aa2906cc67d60335f05303aec81e7dddda4af38fa3063e1037dfa83f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 28 05:18:15.904680 containerd[1596]: time="2025-10-28T05:18:15.903769040Z" level=info msg="Container f71a1bb9d9684f5815893b8641e5d63535770216efbb99252f3b274dad85e82b: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:18:15.919872 containerd[1596]: time="2025-10-28T05:18:15.919830281Z" level=info msg="CreateContainer within sandbox \"78be2563aa2906cc67d60335f05303aec81e7dddda4af38fa3063e1037dfa83f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f71a1bb9d9684f5815893b8641e5d63535770216efbb99252f3b274dad85e82b\"" Oct 28 05:18:15.922058 containerd[1596]: time="2025-10-28T05:18:15.922028454Z" level=info msg="StartContainer for \"f71a1bb9d9684f5815893b8641e5d63535770216efbb99252f3b274dad85e82b\"" Oct 28 05:18:15.925409 containerd[1596]: time="2025-10-28T05:18:15.925360281Z" level=info msg="connecting to shim f71a1bb9d9684f5815893b8641e5d63535770216efbb99252f3b274dad85e82b" address="unix:///run/containerd/s/0a39ede7c8e279fa43886e7725d6383bb37dac3f583aec032c8c461101595e86" protocol=ttrpc version=3 Oct 28 05:18:15.950909 systemd[1]: Started cri-containerd-f71a1bb9d9684f5815893b8641e5d63535770216efbb99252f3b274dad85e82b.scope - libcontainer container f71a1bb9d9684f5815893b8641e5d63535770216efbb99252f3b274dad85e82b. 
Oct 28 05:18:16.034974 containerd[1596]: time="2025-10-28T05:18:16.034636050Z" level=info msg="StartContainer for \"f71a1bb9d9684f5815893b8641e5d63535770216efbb99252f3b274dad85e82b\" returns successfully" Oct 28 05:18:16.101243 kubelet[2761]: E1028 05:18:16.100919 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:16.126166 kubelet[2761]: E1028 05:18:16.126128 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:16.126975 kubelet[2761]: E1028 05:18:16.126947 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:16.160980 kubelet[2761]: I1028 05:18:16.160890 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jwzh2" podStartSLOduration=1.16087141 podStartE2EDuration="1.16087141s" podCreationTimestamp="2025-10-28 05:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:18:16.160776534 +0000 UTC m=+7.289777429" watchObservedRunningTime="2025-10-28 05:18:16.16087141 +0000 UTC m=+7.289872294" Oct 28 05:18:16.218494 kubelet[2761]: E1028 05:18:16.218434 2761 status_manager.go:1018] "Failed to get status for pod" err="pods \"tigera-operator-65cdcdfd6d-n2h25\" is forbidden: User \"system:node:ci-4501.0.0-n-a8513f8a3e\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4501.0.0-n-a8513f8a3e' and this object" podUID="f0e56437-435a-4dfb-b13f-6453c02da327" pod="tigera-operator/tigera-operator-65cdcdfd6d-n2h25" Oct 28 05:18:16.219252 kubelet[2761]: E1028 05:18:16.218579 2761 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4501.0.0-n-a8513f8a3e\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4501.0.0-n-a8513f8a3e' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kubernetes-services-endpoint\"" type="*v1.ConfigMap" Oct 28 05:18:16.219252 kubelet[2761]: E1028 05:18:16.218816 2761 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4501.0.0-n-a8513f8a3e\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4501.0.0-n-a8513f8a3e' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Oct 28 05:18:16.223263 systemd[1]: Created slice kubepods-besteffort-podf0e56437_435a_4dfb_b13f_6453c02da327.slice - libcontainer container kubepods-besteffort-podf0e56437_435a_4dfb_b13f_6453c02da327.slice. 
Oct 28 05:18:16.300144 kubelet[2761]: I1028 05:18:16.299746 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pvt\" (UniqueName: \"kubernetes.io/projected/f0e56437-435a-4dfb-b13f-6453c02da327-kube-api-access-25pvt\") pod \"tigera-operator-65cdcdfd6d-n2h25\" (UID: \"f0e56437-435a-4dfb-b13f-6453c02da327\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-n2h25" Oct 28 05:18:16.300144 kubelet[2761]: I1028 05:18:16.299793 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f0e56437-435a-4dfb-b13f-6453c02da327-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-n2h25\" (UID: \"f0e56437-435a-4dfb-b13f-6453c02da327\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-n2h25" Oct 28 05:18:17.430513 containerd[1596]: time="2025-10-28T05:18:17.430448456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-n2h25,Uid:f0e56437-435a-4dfb-b13f-6453c02da327,Namespace:tigera-operator,Attempt:0,}" Oct 28 05:18:17.454113 containerd[1596]: time="2025-10-28T05:18:17.454050532Z" level=info msg="connecting to shim d0c032d2d0305f6d968e068b057b910118f433ff27c260390552614b0c726430" address="unix:///run/containerd/s/f93f9274e3c15381b757166cd832928ff0e0e95625daabfa8f734027e85ec7e0" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:18:17.489962 systemd[1]: Started cri-containerd-d0c032d2d0305f6d968e068b057b910118f433ff27c260390552614b0c726430.scope - libcontainer container d0c032d2d0305f6d968e068b057b910118f433ff27c260390552614b0c726430. Oct 28 05:18:17.554081 containerd[1596]: time="2025-10-28T05:18:17.554024063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-n2h25,Uid:f0e56437-435a-4dfb-b13f-6453c02da327,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d0c032d2d0305f6d968e068b057b910118f433ff27c260390552614b0c726430\"" Oct 28 05:18:17.557791 containerd[1596]: time="2025-10-28T05:18:17.557721123Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 28 05:18:18.549616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804306908.mount: Deactivated successfully. 
Oct 28 05:18:18.632669 kubelet[2761]: E1028 05:18:18.631876 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:19.137576 kubelet[2761]: E1028 05:18:19.137531 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:19.619059 containerd[1596]: time="2025-10-28T05:18:19.618980837Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:19.619993 containerd[1596]: time="2025-10-28T05:18:19.619958737Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 28 05:18:19.621613 containerd[1596]: time="2025-10-28T05:18:19.620423066Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:19.622553 containerd[1596]: time="2025-10-28T05:18:19.622502948Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:19.623701 containerd[1596]: time="2025-10-28T05:18:19.623668254Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.065891099s" Oct 28 05:18:19.623701 containerd[1596]: time="2025-10-28T05:18:19.623701658Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 28 05:18:19.627438 containerd[1596]: time="2025-10-28T05:18:19.627396672Z" level=info msg="CreateContainer within sandbox \"d0c032d2d0305f6d968e068b057b910118f433ff27c260390552614b0c726430\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 28 05:18:19.635287 containerd[1596]: time="2025-10-28T05:18:19.635238444Z" level=info msg="Container e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:18:19.640062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1975580620.mount: Deactivated successfully. 
Oct 28 05:18:19.642748 containerd[1596]: time="2025-10-28T05:18:19.642703414Z" level=info msg="CreateContainer within sandbox \"d0c032d2d0305f6d968e068b057b910118f433ff27c260390552614b0c726430\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b\"" Oct 28 05:18:19.645671 containerd[1596]: time="2025-10-28T05:18:19.643343146Z" level=info msg="StartContainer for \"e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b\"" Oct 28 05:18:19.645989 containerd[1596]: time="2025-10-28T05:18:19.645954628Z" level=info msg="connecting to shim e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b" address="unix:///run/containerd/s/f93f9274e3c15381b757166cd832928ff0e0e95625daabfa8f734027e85ec7e0" protocol=ttrpc version=3 Oct 28 05:18:19.671894 systemd[1]: Started cri-containerd-e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b.scope - libcontainer container e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b. Oct 28 05:18:19.709912 containerd[1596]: time="2025-10-28T05:18:19.709855687Z" level=info msg="StartContainer for \"e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b\" returns successfully" Oct 28 05:18:20.143694 kubelet[2761]: E1028 05:18:20.142524 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:20.185285 kubelet[2761]: I1028 05:18:20.185183 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-n2h25" podStartSLOduration=2.116331058 podStartE2EDuration="4.185140009s" podCreationTimestamp="2025-10-28 05:18:16 +0000 UTC" firstStartedPulling="2025-10-28 05:18:17.555803652 +0000 UTC m=+8.684804515" lastFinishedPulling="2025-10-28 05:18:19.624612602 +0000 UTC m=+10.753613466" observedRunningTime="2025-10-28 05:18:20.184818675 +0000 UTC m=+11.313819554" watchObservedRunningTime="2025-10-28 05:18:20.185140009 +0000 UTC m=+11.314140896" Oct 28 05:18:23.215047 systemd[1]: cri-containerd-e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b.scope: Deactivated successfully. Oct 28 05:18:23.235652 containerd[1596]: time="2025-10-28T05:18:23.235476261Z" level=info msg="received exit event container_id:\"e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b\" id:\"e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b\" pid:3089 exit_status:1 exited_at:{seconds:1761628703 nanos:220231510}" Oct 28 05:18:23.237072 containerd[1596]: time="2025-10-28T05:18:23.236528669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b\" id:\"e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b\" pid:3089 exit_status:1 exited_at:{seconds:1761628703 nanos:220231510}" Oct 28 05:18:23.279917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b-rootfs.mount: Deactivated successfully. Oct 28 05:18:23.898987 update_engine[1571]: I20251028 05:18:23.898341 1571 update_attempter.cc:509] Updating boot flags... 
Oct 28 05:18:24.175810 kubelet[2761]: I1028 05:18:24.175087 2761 scope.go:117] "RemoveContainer" containerID="e70c4239dde75e5823982194646fe06165f7b07b19278e2ddad00abcfc30090b" Oct 28 05:18:24.207457 containerd[1596]: time="2025-10-28T05:18:24.207306929Z" level=info msg="CreateContainer within sandbox \"d0c032d2d0305f6d968e068b057b910118f433ff27c260390552614b0c726430\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 28 05:18:24.230416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809145273.mount: Deactivated successfully. Oct 28 05:18:24.233668 containerd[1596]: time="2025-10-28T05:18:24.232117298Z" level=info msg="Container 6bfe42e13b3585c78b1d99e23c3c36532ba0b98a0005c9f8cc681df7a1c68bcf: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:18:24.265126 containerd[1596]: time="2025-10-28T05:18:24.263124135Z" level=info msg="CreateContainer within sandbox \"d0c032d2d0305f6d968e068b057b910118f433ff27c260390552614b0c726430\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6bfe42e13b3585c78b1d99e23c3c36532ba0b98a0005c9f8cc681df7a1c68bcf\"" Oct 28 05:18:24.267921 containerd[1596]: time="2025-10-28T05:18:24.266975539Z" level=info msg="StartContainer for \"6bfe42e13b3585c78b1d99e23c3c36532ba0b98a0005c9f8cc681df7a1c68bcf\"" Oct 28 05:18:24.321575 containerd[1596]: time="2025-10-28T05:18:24.321413257Z" level=info msg="connecting to shim 6bfe42e13b3585c78b1d99e23c3c36532ba0b98a0005c9f8cc681df7a1c68bcf" address="unix:///run/containerd/s/f93f9274e3c15381b757166cd832928ff0e0e95625daabfa8f734027e85ec7e0" protocol=ttrpc version=3 Oct 28 05:18:24.423799 systemd[1]: Started cri-containerd-6bfe42e13b3585c78b1d99e23c3c36532ba0b98a0005c9f8cc681df7a1c68bcf.scope - libcontainer container 6bfe42e13b3585c78b1d99e23c3c36532ba0b98a0005c9f8cc681df7a1c68bcf. Oct 28 05:18:24.499675 containerd[1596]: time="2025-10-28T05:18:24.497831880Z" level=info msg="StartContainer for \"6bfe42e13b3585c78b1d99e23c3c36532ba0b98a0005c9f8cc681df7a1c68bcf\" returns successfully" Oct 28 05:18:27.010679 sudo[1827]: pam_unix(sudo:session): session closed for user root Oct 28 05:18:27.015682 sshd[1826]: Connection closed by 139.178.89.65 port 36888 Oct 28 05:18:27.015946 sshd-session[1823]: pam_unix(sshd:session): session closed for user core Oct 28 05:18:27.020243 systemd[1]: sshd@6-164.92.80.11:22-139.178.89.65:36888.service: Deactivated successfully. Oct 28 05:18:27.023795 systemd[1]: session-7.scope: Deactivated successfully. Oct 28 05:18:27.024086 systemd[1]: session-7.scope: Consumed 7.542s CPU time, 167.7M memory peak. Oct 28 05:18:27.025912 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Oct 28 05:18:27.028483 systemd-logind[1570]: Removed session 7. 
Oct 28 05:18:35.833919 kubelet[2761]: I1028 05:18:35.833573 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be3c0fdb-8b0f-4260-9fdd-d9a9fc81849e-tigera-ca-bundle\") pod \"calico-typha-7bd4c69547-vd4hl\" (UID: \"be3c0fdb-8b0f-4260-9fdd-d9a9fc81849e\") " pod="calico-system/calico-typha-7bd4c69547-vd4hl" Oct 28 05:18:35.836096 kubelet[2761]: I1028 05:18:35.833635 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/be3c0fdb-8b0f-4260-9fdd-d9a9fc81849e-typha-certs\") pod \"calico-typha-7bd4c69547-vd4hl\" (UID: \"be3c0fdb-8b0f-4260-9fdd-d9a9fc81849e\") " pod="calico-system/calico-typha-7bd4c69547-vd4hl" Oct 28 05:18:35.836096 kubelet[2761]: I1028 05:18:35.835784 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n54xr\" (UniqueName: \"kubernetes.io/projected/be3c0fdb-8b0f-4260-9fdd-d9a9fc81849e-kube-api-access-n54xr\") pod \"calico-typha-7bd4c69547-vd4hl\" (UID: \"be3c0fdb-8b0f-4260-9fdd-d9a9fc81849e\") " pod="calico-system/calico-typha-7bd4c69547-vd4hl" Oct 28 05:18:35.847484 systemd[1]: Created slice kubepods-besteffort-podbe3c0fdb_8b0f_4260_9fdd_d9a9fc81849e.slice - libcontainer container kubepods-besteffort-podbe3c0fdb_8b0f_4260_9fdd_d9a9fc81849e.slice. Oct 28 05:18:36.074634 systemd[1]: Created slice kubepods-besteffort-poddac615cd_1601_4f88_ad3f_a8e0e7ce9547.slice - libcontainer container kubepods-besteffort-poddac615cd_1601_4f88_ad3f_a8e0e7ce9547.slice. Oct 28 05:18:36.137781 kubelet[2761]: I1028 05:18:36.137035 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-xtables-lock\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.137781 kubelet[2761]: I1028 05:18:36.137125 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-lib-modules\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.137781 kubelet[2761]: I1028 05:18:36.137152 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-var-run-calico\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.137781 kubelet[2761]: I1028 05:18:36.137181 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-cni-bin-dir\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.137781 kubelet[2761]: I1028 05:18:36.137209 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-cni-net-dir\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " 
pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.138247 kubelet[2761]: I1028 05:18:36.137241 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-tigera-ca-bundle\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.138247 kubelet[2761]: I1028 05:18:36.137266 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-var-lib-calico\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.138247 kubelet[2761]: I1028 05:18:36.137295 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-flexvol-driver-host\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.138247 kubelet[2761]: I1028 05:18:36.137324 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-cni-log-dir\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.138247 kubelet[2761]: I1028 05:18:36.137347 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-policysync\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.139204 kubelet[2761]: I1028 05:18:36.137374 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjlft\" (UniqueName: \"kubernetes.io/projected/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-kube-api-access-hjlft\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.139204 kubelet[2761]: I1028 05:18:36.137406 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dac615cd-1601-4f88-ad3f-a8e0e7ce9547-node-certs\") pod \"calico-node-95tdc\" (UID: \"dac615cd-1601-4f88-ad3f-a8e0e7ce9547\") " pod="calico-system/calico-node-95tdc" Oct 28 05:18:36.163787 kubelet[2761]: E1028 05:18:36.163021 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:36.165782 containerd[1596]: time="2025-10-28T05:18:36.165565345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bd4c69547-vd4hl,Uid:be3c0fdb-8b0f-4260-9fdd-d9a9fc81849e,Namespace:calico-system,Attempt:0,}" Oct 28 05:18:36.206502 containerd[1596]: time="2025-10-28T05:18:36.206442424Z" level=info msg="connecting to shim 0b315b0e9e38c4de20cce1bbe732fe797f5ca775423d577319db86fa483f02c3" address="unix:///run/containerd/s/c144e882e5edb2f324d0d1cba5b8adc23ed9ffc4466ae540775f2b441f0f53eb" namespace=k8s.io 
protocol=ttrpc version=3 Oct 28 05:18:36.243991 kubelet[2761]: E1028 05:18:36.243724 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.243991 kubelet[2761]: W1028 05:18:36.243877 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.244575 kubelet[2761]: E1028 05:18:36.244306 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.249861 kubelet[2761]: E1028 05:18:36.249789 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.249861 kubelet[2761]: W1028 05:18:36.249825 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.249861 kubelet[2761]: E1028 05:18:36.249856 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.263535 kubelet[2761]: E1028 05:18:36.263402 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.264232 kubelet[2761]: W1028 05:18:36.263802 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.264232 kubelet[2761]: E1028 05:18:36.263842 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.277083 kubelet[2761]: E1028 05:18:36.276980 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.277083 kubelet[2761]: W1028 05:18:36.277011 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.277083 kubelet[2761]: E1028 05:18:36.277038 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.287343 systemd[1]: Started cri-containerd-0b315b0e9e38c4de20cce1bbe732fe797f5ca775423d577319db86fa483f02c3.scope - libcontainer container 0b315b0e9e38c4de20cce1bbe732fe797f5ca775423d577319db86fa483f02c3. 
Oct 28 05:18:36.295215 kubelet[2761]: E1028 05:18:36.295171 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pj4dv" podUID="65eac80b-7114-46da-934f-c797b4afa603" Oct 28 05:18:36.338388 kubelet[2761]: E1028 05:18:36.338348 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.338388 kubelet[2761]: W1028 05:18:36.338380 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.338605 kubelet[2761]: E1028 05:18:36.338412 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.339281 kubelet[2761]: E1028 05:18:36.339124 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.339281 kubelet[2761]: W1028 05:18:36.339140 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.339281 kubelet[2761]: E1028 05:18:36.339157 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.339806 kubelet[2761]: E1028 05:18:36.339787 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.339806 kubelet[2761]: W1028 05:18:36.339803 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.339806 kubelet[2761]: E1028 05:18:36.339819 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.340907 kubelet[2761]: E1028 05:18:36.340884 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.340907 kubelet[2761]: W1028 05:18:36.340903 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.340907 kubelet[2761]: E1028 05:18:36.340918 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.341746 kubelet[2761]: E1028 05:18:36.341717 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.341746 kubelet[2761]: W1028 05:18:36.341733 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.341746 kubelet[2761]: E1028 05:18:36.341746 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.342102 kubelet[2761]: E1028 05:18:36.342086 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.342102 kubelet[2761]: W1028 05:18:36.342100 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.342198 kubelet[2761]: E1028 05:18:36.342111 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.342977 kubelet[2761]: E1028 05:18:36.342958 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.342977 kubelet[2761]: W1028 05:18:36.342973 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.343087 kubelet[2761]: E1028 05:18:36.342984 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.343581 kubelet[2761]: E1028 05:18:36.343544 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.343581 kubelet[2761]: W1028 05:18:36.343559 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.343581 kubelet[2761]: E1028 05:18:36.343571 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.345163 kubelet[2761]: E1028 05:18:36.344931 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.345163 kubelet[2761]: W1028 05:18:36.345159 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.345303 kubelet[2761]: E1028 05:18:36.345175 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.346223 kubelet[2761]: E1028 05:18:36.346161 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.346223 kubelet[2761]: W1028 05:18:36.346177 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.346223 kubelet[2761]: E1028 05:18:36.346188 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.348438 kubelet[2761]: E1028 05:18:36.348407 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.348438 kubelet[2761]: W1028 05:18:36.348428 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.348553 kubelet[2761]: E1028 05:18:36.348446 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.349505 kubelet[2761]: E1028 05:18:36.349477 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.349505 kubelet[2761]: W1028 05:18:36.349494 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.349505 kubelet[2761]: E1028 05:18:36.349509 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.350678 kubelet[2761]: E1028 05:18:36.350239 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.350678 kubelet[2761]: W1028 05:18:36.350257 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.350678 kubelet[2761]: E1028 05:18:36.350270 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.350929 kubelet[2761]: E1028 05:18:36.350902 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.350929 kubelet[2761]: W1028 05:18:36.350922 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.351006 kubelet[2761]: E1028 05:18:36.350935 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.351987 kubelet[2761]: E1028 05:18:36.351963 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.351987 kubelet[2761]: W1028 05:18:36.351983 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.352082 kubelet[2761]: E1028 05:18:36.351997 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.352196 kubelet[2761]: E1028 05:18:36.352182 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.352196 kubelet[2761]: W1028 05:18:36.352193 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.352281 kubelet[2761]: E1028 05:18:36.352201 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.352695 kubelet[2761]: E1028 05:18:36.352677 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.352695 kubelet[2761]: W1028 05:18:36.352691 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.352863 kubelet[2761]: E1028 05:18:36.352734 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.353605 kubelet[2761]: E1028 05:18:36.353579 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.353605 kubelet[2761]: W1028 05:18:36.353598 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.353725 kubelet[2761]: E1028 05:18:36.353614 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.354862 kubelet[2761]: E1028 05:18:36.354839 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.354862 kubelet[2761]: W1028 05:18:36.354860 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.354973 kubelet[2761]: E1028 05:18:36.354884 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.355131 kubelet[2761]: E1028 05:18:36.355101 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.355131 kubelet[2761]: W1028 05:18:36.355112 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.355214 kubelet[2761]: E1028 05:18:36.355133 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.357094 kubelet[2761]: E1028 05:18:36.357001 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.357094 kubelet[2761]: W1028 05:18:36.357029 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.357094 kubelet[2761]: E1028 05:18:36.357052 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.357094 kubelet[2761]: I1028 05:18:36.357091 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/65eac80b-7114-46da-934f-c797b4afa603-registration-dir\") pod \"csi-node-driver-pj4dv\" (UID: \"65eac80b-7114-46da-934f-c797b4afa603\") " pod="calico-system/csi-node-driver-pj4dv" Oct 28 05:18:36.357419 kubelet[2761]: E1028 05:18:36.357319 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.357419 kubelet[2761]: W1028 05:18:36.357329 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.357419 kubelet[2761]: E1028 05:18:36.357340 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.357419 kubelet[2761]: I1028 05:18:36.357355 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65eac80b-7114-46da-934f-c797b4afa603-kubelet-dir\") pod \"csi-node-driver-pj4dv\" (UID: \"65eac80b-7114-46da-934f-c797b4afa603\") " pod="calico-system/csi-node-driver-pj4dv" Oct 28 05:18:36.357760 kubelet[2761]: E1028 05:18:36.357533 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.357760 kubelet[2761]: W1028 05:18:36.357542 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.357760 kubelet[2761]: E1028 05:18:36.357552 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.357760 kubelet[2761]: I1028 05:18:36.357572 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skp7l\" (UniqueName: \"kubernetes.io/projected/65eac80b-7114-46da-934f-c797b4afa603-kube-api-access-skp7l\") pod \"csi-node-driver-pj4dv\" (UID: \"65eac80b-7114-46da-934f-c797b4afa603\") " pod="calico-system/csi-node-driver-pj4dv" Oct 28 05:18:36.358469 kubelet[2761]: E1028 05:18:36.358448 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.358469 kubelet[2761]: W1028 05:18:36.358467 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.358581 kubelet[2761]: E1028 05:18:36.358480 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.358581 kubelet[2761]: I1028 05:18:36.358509 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/65eac80b-7114-46da-934f-c797b4afa603-varrun\") pod \"csi-node-driver-pj4dv\" (UID: \"65eac80b-7114-46da-934f-c797b4afa603\") " pod="calico-system/csi-node-driver-pj4dv" Oct 28 05:18:36.359425 kubelet[2761]: E1028 05:18:36.359371 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.359425 kubelet[2761]: W1028 05:18:36.359390 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.359425 kubelet[2761]: E1028 05:18:36.359406 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.360402 kubelet[2761]: E1028 05:18:36.359671 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.360402 kubelet[2761]: W1028 05:18:36.359684 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.360402 kubelet[2761]: E1028 05:18:36.359700 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.360513 kubelet[2761]: E1028 05:18:36.360448 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.360513 kubelet[2761]: W1028 05:18:36.360460 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.360513 kubelet[2761]: E1028 05:18:36.360475 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.360746 kubelet[2761]: I1028 05:18:36.360694 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/65eac80b-7114-46da-934f-c797b4afa603-socket-dir\") pod \"csi-node-driver-pj4dv\" (UID: \"65eac80b-7114-46da-934f-c797b4afa603\") " pod="calico-system/csi-node-driver-pj4dv" Oct 28 05:18:36.360928 kubelet[2761]: E1028 05:18:36.360910 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.360928 kubelet[2761]: W1028 05:18:36.360925 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.361018 kubelet[2761]: E1028 05:18:36.360938 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.361771 kubelet[2761]: E1028 05:18:36.361751 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.361771 kubelet[2761]: W1028 05:18:36.361767 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.361880 kubelet[2761]: E1028 05:18:36.361780 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.362303 kubelet[2761]: E1028 05:18:36.362282 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.362361 kubelet[2761]: W1028 05:18:36.362302 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.362361 kubelet[2761]: E1028 05:18:36.362317 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.362983 kubelet[2761]: E1028 05:18:36.362966 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.362983 kubelet[2761]: W1028 05:18:36.362980 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.363082 kubelet[2761]: E1028 05:18:36.362992 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.363530 kubelet[2761]: E1028 05:18:36.363513 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.363530 kubelet[2761]: W1028 05:18:36.363527 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.363619 kubelet[2761]: E1028 05:18:36.363539 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.363992 kubelet[2761]: E1028 05:18:36.363972 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.363992 kubelet[2761]: W1028 05:18:36.363988 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.364085 kubelet[2761]: E1028 05:18:36.364000 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.364531 kubelet[2761]: E1028 05:18:36.364510 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.364531 kubelet[2761]: W1028 05:18:36.364526 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.364630 kubelet[2761]: E1028 05:18:36.364539 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.364889 kubelet[2761]: E1028 05:18:36.364873 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.364889 kubelet[2761]: W1028 05:18:36.364886 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.364980 kubelet[2761]: E1028 05:18:36.364900 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.383839 kubelet[2761]: E1028 05:18:36.382007 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:36.384493 containerd[1596]: time="2025-10-28T05:18:36.384451893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-95tdc,Uid:dac615cd-1601-4f88-ad3f-a8e0e7ce9547,Namespace:calico-system,Attempt:0,}" Oct 28 05:18:36.410381 containerd[1596]: time="2025-10-28T05:18:36.410133313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bd4c69547-vd4hl,Uid:be3c0fdb-8b0f-4260-9fdd-d9a9fc81849e,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b315b0e9e38c4de20cce1bbe732fe797f5ca775423d577319db86fa483f02c3\"" Oct 28 05:18:36.424459 kubelet[2761]: E1028 05:18:36.424300 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:36.431159 containerd[1596]: time="2025-10-28T05:18:36.430863767Z" level=info msg="connecting to shim c6a77203bf1fa8d4c49c75e864b78f704c15f34a297c17ce31d65b4f4e7514f4" address="unix:///run/containerd/s/2a87c113b2acd5a51d346f38c17a8e4084d8479e5ad7eeacc8a1e8555dbb38d6" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:18:36.442298 containerd[1596]: time="2025-10-28T05:18:36.442177417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 28 05:18:36.466848 kubelet[2761]: E1028 05:18:36.466446 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.466848 kubelet[2761]: W1028 05:18:36.466479 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.466848 kubelet[2761]: E1028 05:18:36.466508 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.468637 kubelet[2761]: E1028 05:18:36.468581 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.468637 kubelet[2761]: W1028 05:18:36.468621 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.469900 kubelet[2761]: E1028 05:18:36.468687 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.470205 kubelet[2761]: E1028 05:18:36.470156 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.470205 kubelet[2761]: W1028 05:18:36.470194 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.470312 kubelet[2761]: E1028 05:18:36.470225 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.471854 kubelet[2761]: E1028 05:18:36.471816 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.471854 kubelet[2761]: W1028 05:18:36.471852 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.472001 kubelet[2761]: E1028 05:18:36.471902 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.473893 kubelet[2761]: E1028 05:18:36.472764 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.473893 kubelet[2761]: W1028 05:18:36.472785 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.473893 kubelet[2761]: E1028 05:18:36.472809 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.473893 kubelet[2761]: E1028 05:18:36.473349 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.473893 kubelet[2761]: W1028 05:18:36.473365 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.473893 kubelet[2761]: E1028 05:18:36.473385 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.473893 kubelet[2761]: E1028 05:18:36.473704 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.473893 kubelet[2761]: W1028 05:18:36.473715 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.473893 kubelet[2761]: E1028 05:18:36.473726 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.474704 kubelet[2761]: E1028 05:18:36.474172 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.474704 kubelet[2761]: W1028 05:18:36.474183 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.474704 kubelet[2761]: E1028 05:18:36.474195 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.474979 kubelet[2761]: E1028 05:18:36.474938 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.474979 kubelet[2761]: W1028 05:18:36.474953 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.474979 kubelet[2761]: E1028 05:18:36.474969 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.475664 kubelet[2761]: E1028 05:18:36.475184 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.475664 kubelet[2761]: W1028 05:18:36.475195 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.475664 kubelet[2761]: E1028 05:18:36.475208 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.475664 kubelet[2761]: E1028 05:18:36.475500 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.475664 kubelet[2761]: W1028 05:18:36.475509 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.476893 kubelet[2761]: E1028 05:18:36.475519 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.476893 kubelet[2761]: E1028 05:18:36.475998 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.476893 kubelet[2761]: W1028 05:18:36.476008 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.476893 kubelet[2761]: E1028 05:18:36.476019 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.476893 kubelet[2761]: E1028 05:18:36.476176 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.476893 kubelet[2761]: W1028 05:18:36.476183 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.476893 kubelet[2761]: E1028 05:18:36.476191 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.476893 kubelet[2761]: E1028 05:18:36.476332 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.476893 kubelet[2761]: W1028 05:18:36.476339 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.476893 kubelet[2761]: E1028 05:18:36.476349 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.477271 kubelet[2761]: E1028 05:18:36.476810 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.477271 kubelet[2761]: W1028 05:18:36.476839 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.477271 kubelet[2761]: E1028 05:18:36.476854 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.477271 kubelet[2761]: E1028 05:18:36.477072 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.477271 kubelet[2761]: W1028 05:18:36.477084 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.477271 kubelet[2761]: E1028 05:18:36.477097 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.477480 kubelet[2761]: E1028 05:18:36.477279 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.477480 kubelet[2761]: W1028 05:18:36.477290 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.477480 kubelet[2761]: E1028 05:18:36.477323 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.478720 kubelet[2761]: E1028 05:18:36.477548 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.478720 kubelet[2761]: W1028 05:18:36.477559 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.478720 kubelet[2761]: E1028 05:18:36.477572 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.478720 kubelet[2761]: E1028 05:18:36.477904 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.478720 kubelet[2761]: W1028 05:18:36.477918 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.478720 kubelet[2761]: E1028 05:18:36.477932 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.478720 kubelet[2761]: E1028 05:18:36.478250 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.478720 kubelet[2761]: W1028 05:18:36.478263 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.478720 kubelet[2761]: E1028 05:18:36.478279 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.478720 kubelet[2761]: E1028 05:18:36.478518 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.480328 kubelet[2761]: W1028 05:18:36.478532 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.480328 kubelet[2761]: E1028 05:18:36.478544 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.480328 kubelet[2761]: E1028 05:18:36.480065 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.480328 kubelet[2761]: W1028 05:18:36.480078 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.480328 kubelet[2761]: E1028 05:18:36.480093 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.482110 kubelet[2761]: E1028 05:18:36.480345 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.482110 kubelet[2761]: W1028 05:18:36.480358 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.482110 kubelet[2761]: E1028 05:18:36.480373 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.482110 kubelet[2761]: E1028 05:18:36.480733 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.482110 kubelet[2761]: W1028 05:18:36.480745 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.482110 kubelet[2761]: E1028 05:18:36.480756 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.483897 kubelet[2761]: E1028 05:18:36.483865 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.483897 kubelet[2761]: W1028 05:18:36.483888 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.484035 kubelet[2761]: E1028 05:18:36.483908 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:36.487942 systemd[1]: Started cri-containerd-c6a77203bf1fa8d4c49c75e864b78f704c15f34a297c17ce31d65b4f4e7514f4.scope - libcontainer container c6a77203bf1fa8d4c49c75e864b78f704c15f34a297c17ce31d65b4f4e7514f4. Oct 28 05:18:36.518981 kubelet[2761]: E1028 05:18:36.518942 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:36.518981 kubelet[2761]: W1028 05:18:36.518971 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:36.518981 kubelet[2761]: E1028 05:18:36.518997 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:36.557092 containerd[1596]: time="2025-10-28T05:18:36.557039610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-95tdc,Uid:dac615cd-1601-4f88-ad3f-a8e0e7ce9547,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6a77203bf1fa8d4c49c75e864b78f704c15f34a297c17ce31d65b4f4e7514f4\"" Oct 28 05:18:36.558431 kubelet[2761]: E1028 05:18:36.558398 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:37.738573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597192788.mount: Deactivated successfully. Oct 28 05:18:38.080341 kubelet[2761]: E1028 05:18:38.080171 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pj4dv" podUID="65eac80b-7114-46da-934f-c797b4afa603" Oct 28 05:18:38.999443 containerd[1596]: time="2025-10-28T05:18:38.999381983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:39.000568 containerd[1596]: time="2025-10-28T05:18:39.000282308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 28 05:18:39.001779 containerd[1596]: time="2025-10-28T05:18:39.001323730Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:39.003600 containerd[1596]: time="2025-10-28T05:18:39.003564413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:39.005045 containerd[1596]: time="2025-10-28T05:18:39.004485593Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.562185003s" Oct 28 05:18:39.005045 containerd[1596]: time="2025-10-28T05:18:39.004565092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 28 05:18:39.007948 containerd[1596]: time="2025-10-28T05:18:39.007908352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 28 05:18:39.043064 containerd[1596]: time="2025-10-28T05:18:39.043014613Z" level=info msg="CreateContainer within sandbox \"0b315b0e9e38c4de20cce1bbe732fe797f5ca775423d577319db86fa483f02c3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 28 05:18:39.069594 containerd[1596]: time="2025-10-28T05:18:39.068773582Z" level=info msg="Container 5282c5baf0652d26facbf30b4a7fe94000ff20429723b3ed84fb4dc4f38b94e2: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:18:39.078225 containerd[1596]: time="2025-10-28T05:18:39.078177646Z" level=info msg="CreateContainer within sandbox 
\"0b315b0e9e38c4de20cce1bbe732fe797f5ca775423d577319db86fa483f02c3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5282c5baf0652d26facbf30b4a7fe94000ff20429723b3ed84fb4dc4f38b94e2\"" Oct 28 05:18:39.079334 containerd[1596]: time="2025-10-28T05:18:39.079305578Z" level=info msg="StartContainer for \"5282c5baf0652d26facbf30b4a7fe94000ff20429723b3ed84fb4dc4f38b94e2\"" Oct 28 05:18:39.081116 containerd[1596]: time="2025-10-28T05:18:39.080999035Z" level=info msg="connecting to shim 5282c5baf0652d26facbf30b4a7fe94000ff20429723b3ed84fb4dc4f38b94e2" address="unix:///run/containerd/s/c144e882e5edb2f324d0d1cba5b8adc23ed9ffc4466ae540775f2b441f0f53eb" protocol=ttrpc version=3 Oct 28 05:18:39.118970 systemd[1]: Started cri-containerd-5282c5baf0652d26facbf30b4a7fe94000ff20429723b3ed84fb4dc4f38b94e2.scope - libcontainer container 5282c5baf0652d26facbf30b4a7fe94000ff20429723b3ed84fb4dc4f38b94e2. Oct 28 05:18:39.196601 containerd[1596]: time="2025-10-28T05:18:39.196386925Z" level=info msg="StartContainer for \"5282c5baf0652d26facbf30b4a7fe94000ff20429723b3ed84fb4dc4f38b94e2\" returns successfully" Oct 28 05:18:39.236732 kubelet[2761]: E1028 05:18:39.236077 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:39.275099 kubelet[2761]: E1028 05:18:39.274947 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.275289 kubelet[2761]: W1028 05:18:39.275258 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.275390 kubelet[2761]: E1028 05:18:39.275367 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.275720 kubelet[2761]: E1028 05:18:39.275701 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.275898 kubelet[2761]: W1028 05:18:39.275880 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.276038 kubelet[2761]: E1028 05:18:39.276021 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.276537 kubelet[2761]: E1028 05:18:39.276507 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.276758 kubelet[2761]: W1028 05:18:39.276740 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.276920 kubelet[2761]: E1028 05:18:39.276902 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:39.277402 kubelet[2761]: E1028 05:18:39.277373 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.277677 kubelet[2761]: W1028 05:18:39.277561 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.277677 kubelet[2761]: E1028 05:18:39.277597 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.278226 kubelet[2761]: E1028 05:18:39.278143 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.278226 kubelet[2761]: W1028 05:18:39.278160 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.278226 kubelet[2761]: E1028 05:18:39.278175 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.279040 kubelet[2761]: E1028 05:18:39.278841 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.279040 kubelet[2761]: W1028 05:18:39.278863 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.279040 kubelet[2761]: E1028 05:18:39.278881 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.279809 kubelet[2761]: E1028 05:18:39.279789 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.279953 kubelet[2761]: W1028 05:18:39.279934 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.280045 kubelet[2761]: E1028 05:18:39.280033 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.280382 kubelet[2761]: E1028 05:18:39.280257 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.280382 kubelet[2761]: W1028 05:18:39.280267 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.280382 kubelet[2761]: E1028 05:18:39.280277 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:39.283669 kubelet[2761]: E1028 05:18:39.283623 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.283902 kubelet[2761]: W1028 05:18:39.283752 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.283902 kubelet[2761]: E1028 05:18:39.283780 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.284147 kubelet[2761]: E1028 05:18:39.284136 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.284206 kubelet[2761]: W1028 05:18:39.284192 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.284327 kubelet[2761]: E1028 05:18:39.284250 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.284446 kubelet[2761]: E1028 05:18:39.284436 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.284494 kubelet[2761]: W1028 05:18:39.284486 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.284549 kubelet[2761]: E1028 05:18:39.284540 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.284925 kubelet[2761]: E1028 05:18:39.284823 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.284925 kubelet[2761]: W1028 05:18:39.284834 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.284925 kubelet[2761]: E1028 05:18:39.284844 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.285109 kubelet[2761]: E1028 05:18:39.285089 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.285198 kubelet[2761]: W1028 05:18:39.285186 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.285285 kubelet[2761]: E1028 05:18:39.285269 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:39.285596 kubelet[2761]: E1028 05:18:39.285494 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.285596 kubelet[2761]: W1028 05:18:39.285509 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.285596 kubelet[2761]: E1028 05:18:39.285521 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.287904 kubelet[2761]: E1028 05:18:39.287883 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.288624 kubelet[2761]: W1028 05:18:39.287991 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.288624 kubelet[2761]: E1028 05:18:39.288012 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.295341 kubelet[2761]: E1028 05:18:39.295310 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.295594 kubelet[2761]: W1028 05:18:39.295573 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.295707 kubelet[2761]: E1028 05:18:39.295694 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.296066 kubelet[2761]: E1028 05:18:39.296053 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.296155 kubelet[2761]: W1028 05:18:39.296145 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.296232 kubelet[2761]: E1028 05:18:39.296222 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.296559 kubelet[2761]: E1028 05:18:39.296538 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.296618 kubelet[2761]: W1028 05:18:39.296560 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.296618 kubelet[2761]: E1028 05:18:39.296576 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:39.297259 kubelet[2761]: E1028 05:18:39.297238 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.297259 kubelet[2761]: W1028 05:18:39.297256 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.297350 kubelet[2761]: E1028 05:18:39.297271 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.297970 kubelet[2761]: E1028 05:18:39.297920 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.297970 kubelet[2761]: W1028 05:18:39.297936 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.298343 kubelet[2761]: E1028 05:18:39.298324 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.298815 kubelet[2761]: E1028 05:18:39.298800 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.298867 kubelet[2761]: W1028 05:18:39.298815 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.298867 kubelet[2761]: E1028 05:18:39.298829 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.299759 kubelet[2761]: E1028 05:18:39.299743 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.299759 kubelet[2761]: W1028 05:18:39.299758 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.299888 kubelet[2761]: E1028 05:18:39.299772 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.299984 kubelet[2761]: E1028 05:18:39.299973 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.300034 kubelet[2761]: W1028 05:18:39.299984 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.300034 kubelet[2761]: E1028 05:18:39.299994 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:39.300889 kubelet[2761]: E1028 05:18:39.300867 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.300889 kubelet[2761]: W1028 05:18:39.300882 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.301134 kubelet[2761]: E1028 05:18:39.300896 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.301134 kubelet[2761]: E1028 05:18:39.301066 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.301134 kubelet[2761]: W1028 05:18:39.301073 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.301134 kubelet[2761]: E1028 05:18:39.301082 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.301433 kubelet[2761]: E1028 05:18:39.301419 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.301433 kubelet[2761]: W1028 05:18:39.301435 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.301433 kubelet[2761]: E1028 05:18:39.301447 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.302908 kubelet[2761]: E1028 05:18:39.302889 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.302908 kubelet[2761]: W1028 05:18:39.302907 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.303063 kubelet[2761]: E1028 05:18:39.302922 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.304162 kubelet[2761]: E1028 05:18:39.304144 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.304162 kubelet[2761]: W1028 05:18:39.304160 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.304275 kubelet[2761]: E1028 05:18:39.304174 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:39.304777 kubelet[2761]: E1028 05:18:39.304750 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.304777 kubelet[2761]: W1028 05:18:39.304766 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.304944 kubelet[2761]: E1028 05:18:39.304779 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.305858 kubelet[2761]: E1028 05:18:39.305842 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.305909 kubelet[2761]: W1028 05:18:39.305857 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.305909 kubelet[2761]: E1028 05:18:39.305873 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.307420 kubelet[2761]: E1028 05:18:39.307402 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.307768 kubelet[2761]: W1028 05:18:39.307524 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.307768 kubelet[2761]: E1028 05:18:39.307540 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.309952 kubelet[2761]: E1028 05:18:39.308820 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.309952 kubelet[2761]: W1028 05:18:39.308942 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.309952 kubelet[2761]: E1028 05:18:39.308959 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:39.313273 kubelet[2761]: E1028 05:18:39.312950 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:39.313405 kubelet[2761]: W1028 05:18:39.313387 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:39.313607 kubelet[2761]: E1028 05:18:39.313579 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:40.063535 kubelet[2761]: E1028 05:18:40.062960 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pj4dv" podUID="65eac80b-7114-46da-934f-c797b4afa603" Oct 28 05:18:40.236143 kubelet[2761]: I1028 05:18:40.234401 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 05:18:40.236143 kubelet[2761]: E1028 05:18:40.236104 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:40.295127 kubelet[2761]: E1028 05:18:40.295083 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.295918 kubelet[2761]: W1028 05:18:40.295699 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.295918 kubelet[2761]: E1028 05:18:40.295744 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.296262 kubelet[2761]: E1028 05:18:40.296246 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.296939 kubelet[2761]: W1028 05:18:40.296713 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.296939 kubelet[2761]: E1028 05:18:40.296757 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.297290 kubelet[2761]: E1028 05:18:40.297114 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.297290 kubelet[2761]: W1028 05:18:40.297127 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.297290 kubelet[2761]: E1028 05:18:40.297140 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.297581 kubelet[2761]: E1028 05:18:40.297567 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.297697 kubelet[2761]: W1028 05:18:40.297682 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.297866 kubelet[2761]: E1028 05:18:40.297756 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:40.298025 kubelet[2761]: E1028 05:18:40.298014 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.298125 kubelet[2761]: W1028 05:18:40.298111 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.298215 kubelet[2761]: E1028 05:18:40.298203 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.298602 kubelet[2761]: E1028 05:18:40.298500 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.298602 kubelet[2761]: W1028 05:18:40.298513 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.298602 kubelet[2761]: E1028 05:18:40.298524 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.298869 kubelet[2761]: E1028 05:18:40.298845 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.299061 kubelet[2761]: W1028 05:18:40.298937 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.299061 kubelet[2761]: E1028 05:18:40.298954 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.299448 kubelet[2761]: E1028 05:18:40.299428 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.299779 kubelet[2761]: W1028 05:18:40.299546 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.299779 kubelet[2761]: E1028 05:18:40.299572 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.300001 kubelet[2761]: E1028 05:18:40.299988 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.300162 kubelet[2761]: W1028 05:18:40.300062 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.300162 kubelet[2761]: E1028 05:18:40.300078 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:40.300348 kubelet[2761]: E1028 05:18:40.300336 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.300493 kubelet[2761]: W1028 05:18:40.300403 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.300493 kubelet[2761]: E1028 05:18:40.300418 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.300913 kubelet[2761]: E1028 05:18:40.300798 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.300913 kubelet[2761]: W1028 05:18:40.300813 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.300913 kubelet[2761]: E1028 05:18:40.300825 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.301250 kubelet[2761]: E1028 05:18:40.301134 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.301250 kubelet[2761]: W1028 05:18:40.301144 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.301250 kubelet[2761]: E1028 05:18:40.301154 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.301409 kubelet[2761]: E1028 05:18:40.301399 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.301468 kubelet[2761]: W1028 05:18:40.301458 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.301516 kubelet[2761]: E1028 05:18:40.301507 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.302074 kubelet[2761]: E1028 05:18:40.301831 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.302074 kubelet[2761]: W1028 05:18:40.301849 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.302074 kubelet[2761]: E1028 05:18:40.301863 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:40.302625 kubelet[2761]: E1028 05:18:40.302251 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.302625 kubelet[2761]: W1028 05:18:40.302264 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.302625 kubelet[2761]: E1028 05:18:40.302278 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.306863 kubelet[2761]: E1028 05:18:40.306827 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.306863 kubelet[2761]: W1028 05:18:40.306853 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.307107 kubelet[2761]: E1028 05:18:40.306877 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.307227 kubelet[2761]: E1028 05:18:40.307193 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.307322 kubelet[2761]: W1028 05:18:40.307253 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.307322 kubelet[2761]: E1028 05:18:40.307277 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.307778 kubelet[2761]: E1028 05:18:40.307758 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.307836 kubelet[2761]: W1028 05:18:40.307777 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.307836 kubelet[2761]: E1028 05:18:40.307794 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:40.308144 containerd[1596]: time="2025-10-28T05:18:40.308103844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:40.308906 kubelet[2761]: E1028 05:18:40.308884 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.309014 kubelet[2761]: W1028 05:18:40.308909 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.309014 kubelet[2761]: E1028 05:18:40.308925 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.309149 kubelet[2761]: E1028 05:18:40.309135 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.309149 kubelet[2761]: W1028 05:18:40.309148 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.309262 kubelet[2761]: E1028 05:18:40.309158 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.309344 kubelet[2761]: E1028 05:18:40.309333 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.309344 kubelet[2761]: W1028 05:18:40.309342 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.309446 kubelet[2761]: E1028 05:18:40.309350 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.309829 kubelet[2761]: E1028 05:18:40.309811 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.309829 kubelet[2761]: W1028 05:18:40.309826 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.309963 kubelet[2761]: E1028 05:18:40.309838 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:40.310251 kubelet[2761]: E1028 05:18:40.310237 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.310251 kubelet[2761]: W1028 05:18:40.310250 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.310313 kubelet[2761]: E1028 05:18:40.310262 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.310690 kubelet[2761]: E1028 05:18:40.310673 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.310690 kubelet[2761]: W1028 05:18:40.310688 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.310771 kubelet[2761]: E1028 05:18:40.310700 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.310997 containerd[1596]: time="2025-10-28T05:18:40.310907028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 28 05:18:40.311274 kubelet[2761]: E1028 05:18:40.311185 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.311274 kubelet[2761]: W1028 05:18:40.311203 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.311274 kubelet[2761]: E1028 05:18:40.311217 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.311756 kubelet[2761]: E1028 05:18:40.311699 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.311756 kubelet[2761]: W1028 05:18:40.311718 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.312022 kubelet[2761]: E1028 05:18:40.311934 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:40.312094 containerd[1596]: time="2025-10-28T05:18:40.312012777Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:40.312391 kubelet[2761]: E1028 05:18:40.312369 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.312566 kubelet[2761]: W1028 05:18:40.312481 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.312566 kubelet[2761]: E1028 05:18:40.312503 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.312863 kubelet[2761]: E1028 05:18:40.312846 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.313074 kubelet[2761]: W1028 05:18:40.312964 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.313074 kubelet[2761]: E1028 05:18:40.312986 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.313480 kubelet[2761]: E1028 05:18:40.313281 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.313710 kubelet[2761]: W1028 05:18:40.313549 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.313710 kubelet[2761]: E1028 05:18:40.313567 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.315051 kubelet[2761]: E1028 05:18:40.314990 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.315051 kubelet[2761]: W1028 05:18:40.315011 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.315051 kubelet[2761]: E1028 05:18:40.315028 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:40.316506 kubelet[2761]: E1028 05:18:40.316447 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.316506 kubelet[2761]: W1028 05:18:40.316471 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.316506 kubelet[2761]: E1028 05:18:40.316488 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.317233 kubelet[2761]: E1028 05:18:40.317139 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.317233 kubelet[2761]: W1028 05:18:40.317160 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.317233 kubelet[2761]: E1028 05:18:40.317178 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:18:40.317590 containerd[1596]: time="2025-10-28T05:18:40.317537954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:40.317772 kubelet[2761]: E1028 05:18:40.317752 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:18:40.317819 kubelet[2761]: W1028 05:18:40.317774 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:18:40.317819 kubelet[2761]: E1028 05:18:40.317794 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:18:40.320064 containerd[1596]: time="2025-10-28T05:18:40.319994039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.311180502s" Oct 28 05:18:40.320064 containerd[1596]: time="2025-10-28T05:18:40.320056609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 28 05:18:40.327075 containerd[1596]: time="2025-10-28T05:18:40.326995771Z" level=info msg="CreateContainer within sandbox \"c6a77203bf1fa8d4c49c75e864b78f704c15f34a297c17ce31d65b4f4e7514f4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 28 05:18:40.371674 containerd[1596]: time="2025-10-28T05:18:40.370823954Z" level=info msg="Container 6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:18:40.385278 containerd[1596]: time="2025-10-28T05:18:40.385183092Z" level=info msg="CreateContainer within sandbox \"c6a77203bf1fa8d4c49c75e864b78f704c15f34a297c17ce31d65b4f4e7514f4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804\"" Oct 28 05:18:40.386685 containerd[1596]: time="2025-10-28T05:18:40.386135110Z" level=info msg="StartContainer for \"6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804\"" Oct 28 05:18:40.388355 containerd[1596]: time="2025-10-28T05:18:40.388280080Z" level=info msg="connecting to shim 6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804" address="unix:///run/containerd/s/2a87c113b2acd5a51d346f38c17a8e4084d8479e5ad7eeacc8a1e8555dbb38d6" protocol=ttrpc version=3 Oct 28 05:18:40.426198 systemd[1]: Started cri-containerd-6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804.scope - libcontainer container 6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804. Oct 28 05:18:40.485355 containerd[1596]: time="2025-10-28T05:18:40.485306969Z" level=info msg="StartContainer for \"6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804\" returns successfully" Oct 28 05:18:40.504529 systemd[1]: cri-containerd-6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804.scope: Deactivated successfully. Oct 28 05:18:40.505314 systemd[1]: cri-containerd-6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804.scope: Consumed 39ms CPU time, 6.2M memory peak, 4.6M written to disk. 
Oct 28 05:18:40.510394 containerd[1596]: time="2025-10-28T05:18:40.510348955Z" level=info msg="received exit event container_id:\"6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804\" id:\"6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804\" pid:3541 exited_at:{seconds:1761628720 nanos:509968612}" Oct 28 05:18:40.559115 containerd[1596]: time="2025-10-28T05:18:40.559026904Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804\" id:\"6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804\" pid:3541 exited_at:{seconds:1761628720 nanos:509968612}" Oct 28 05:18:40.566410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e069450d5bc9e844568f920ee358e66a9a58ae2f1a47d436a8e4f9f713c5804-rootfs.mount: Deactivated successfully. Oct 28 05:18:41.239975 kubelet[2761]: E1028 05:18:41.239885 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:41.244843 containerd[1596]: time="2025-10-28T05:18:41.244090923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 28 05:18:41.263606 kubelet[2761]: I1028 05:18:41.260970 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bd4c69547-vd4hl" podStartSLOduration=3.690152752 podStartE2EDuration="6.260952897s" podCreationTimestamp="2025-10-28 05:18:35 +0000 UTC" firstStartedPulling="2025-10-28 05:18:36.440250064 +0000 UTC m=+27.569250930" lastFinishedPulling="2025-10-28 05:18:39.011050189 +0000 UTC m=+30.140051075" observedRunningTime="2025-10-28 05:18:39.289223253 +0000 UTC m=+30.418224138" watchObservedRunningTime="2025-10-28 05:18:41.260952897 +0000 UTC m=+32.389953784" Oct 28 05:18:42.062765 kubelet[2761]: E1028 05:18:42.062707 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pj4dv" podUID="65eac80b-7114-46da-934f-c797b4afa603" Oct 28 05:18:44.063688 kubelet[2761]: E1028 05:18:44.062625 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pj4dv" podUID="65eac80b-7114-46da-934f-c797b4afa603" Oct 28 05:18:44.360026 containerd[1596]: time="2025-10-28T05:18:44.359518927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:44.361029 containerd[1596]: time="2025-10-28T05:18:44.360486523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 28 05:18:44.361669 containerd[1596]: time="2025-10-28T05:18:44.361610752Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:44.364199 containerd[1596]: time="2025-10-28T05:18:44.363755905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:44.364844 containerd[1596]: time="2025-10-28T05:18:44.364807032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.120658525s" Oct 28 05:18:44.364967 containerd[1596]: time="2025-10-28T05:18:44.364952807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 28 05:18:44.372894 containerd[1596]: time="2025-10-28T05:18:44.372782681Z" level=info msg="CreateContainer within sandbox \"c6a77203bf1fa8d4c49c75e864b78f704c15f34a297c17ce31d65b4f4e7514f4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 28 05:18:44.386088 containerd[1596]: time="2025-10-28T05:18:44.386036289Z" level=info msg="Container 3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:18:44.394725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432088287.mount: Deactivated successfully. Oct 28 05:18:44.403208 containerd[1596]: time="2025-10-28T05:18:44.403139228Z" level=info msg="CreateContainer within sandbox \"c6a77203bf1fa8d4c49c75e864b78f704c15f34a297c17ce31d65b4f4e7514f4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898\"" Oct 28 05:18:44.404182 containerd[1596]: time="2025-10-28T05:18:44.404142374Z" level=info msg="StartContainer for \"3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898\"" Oct 28 05:18:44.406537 containerd[1596]: time="2025-10-28T05:18:44.406492181Z" level=info msg="connecting to shim 3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898" address="unix:///run/containerd/s/2a87c113b2acd5a51d346f38c17a8e4084d8479e5ad7eeacc8a1e8555dbb38d6" protocol=ttrpc version=3 Oct 28 05:18:44.441980 systemd[1]: Started cri-containerd-3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898.scope - libcontainer container 3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898. Oct 28 05:18:44.497976 containerd[1596]: time="2025-10-28T05:18:44.497926942Z" level=info msg="StartContainer for \"3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898\" returns successfully" Oct 28 05:18:45.218636 systemd[1]: cri-containerd-3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898.scope: Deactivated successfully. Oct 28 05:18:45.219582 systemd[1]: cri-containerd-3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898.scope: Consumed 695ms CPU time, 168.6M memory peak, 8.4M read from disk, 171.3M written to disk. 
Oct 28 05:18:45.222883 containerd[1596]: time="2025-10-28T05:18:45.222680605Z" level=info msg="received exit event container_id:\"3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898\" id:\"3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898\" pid:3598 exited_at:{seconds:1761628725 nanos:221802875}" Oct 28 05:18:45.225165 containerd[1596]: time="2025-10-28T05:18:45.225109319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898\" id:\"3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898\" pid:3598 exited_at:{seconds:1761628725 nanos:221802875}" Oct 28 05:18:45.282429 kubelet[2761]: E1028 05:18:45.281211 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:45.298889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bec02abadc0b8ca82bf93ba6c9162c1708e545cb9c3e86edc65fc0b1599b898-rootfs.mount: Deactivated successfully. Oct 28 05:18:45.353734 kubelet[2761]: I1028 05:18:45.353382 2761 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 28 05:18:45.409541 systemd[1]: Created slice kubepods-burstable-pod10a83b40_012d_4ade_8ade_595315f777ca.slice - libcontainer container kubepods-burstable-pod10a83b40_012d_4ade_8ade_595315f777ca.slice. Oct 28 05:18:45.440725 systemd[1]: Created slice kubepods-besteffort-pod7a9c518f_3d77_4deb_b369_d23040ed89ce.slice - libcontainer container kubepods-besteffort-pod7a9c518f_3d77_4deb_b369_d23040ed89ce.slice. Oct 28 05:18:45.461745 systemd[1]: Created slice kubepods-burstable-pod58f11e77_8c5d_47b6_8552_cf2d1e5f275c.slice - libcontainer container kubepods-burstable-pod58f11e77_8c5d_47b6_8552_cf2d1e5f275c.slice. Oct 28 05:18:45.470737 systemd[1]: Created slice kubepods-besteffort-pod3e214f89_6d8a_48a0_9125_ff9ee356160a.slice - libcontainer container kubepods-besteffort-pod3e214f89_6d8a_48a0_9125_ff9ee356160a.slice. 
Oct 28 05:18:45.483933 kubelet[2761]: I1028 05:18:45.483867 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e214f89-6d8a-48a0-9125-ff9ee356160a-tigera-ca-bundle\") pod \"calico-kube-controllers-c4bfd986d-xlpx8\" (UID: \"3e214f89-6d8a-48a0-9125-ff9ee356160a\") " pod="calico-system/calico-kube-controllers-c4bfd986d-xlpx8" Oct 28 05:18:45.483933 kubelet[2761]: I1028 05:18:45.483938 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg6kt\" (UniqueName: \"kubernetes.io/projected/10a83b40-012d-4ade-8ade-595315f777ca-kube-api-access-bg6kt\") pod \"coredns-66bc5c9577-z42gp\" (UID: \"10a83b40-012d-4ade-8ade-595315f777ca\") " pod="kube-system/coredns-66bc5c9577-z42gp" Oct 28 05:18:45.484163 kubelet[2761]: I1028 05:18:45.483974 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10a83b40-012d-4ade-8ade-595315f777ca-config-volume\") pod \"coredns-66bc5c9577-z42gp\" (UID: \"10a83b40-012d-4ade-8ade-595315f777ca\") " pod="kube-system/coredns-66bc5c9577-z42gp" Oct 28 05:18:45.484163 kubelet[2761]: I1028 05:18:45.484004 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a9c518f-3d77-4deb-b369-d23040ed89ce-whisker-backend-key-pair\") pod \"whisker-65f46fbbcc-fd4lz\" (UID: \"7a9c518f-3d77-4deb-b369-d23040ed89ce\") " pod="calico-system/whisker-65f46fbbcc-fd4lz" Oct 28 05:18:45.484163 kubelet[2761]: I1028 05:18:45.484027 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n97g7\" (UniqueName: \"kubernetes.io/projected/7a9c518f-3d77-4deb-b369-d23040ed89ce-kube-api-access-n97g7\") pod \"whisker-65f46fbbcc-fd4lz\" (UID: \"7a9c518f-3d77-4deb-b369-d23040ed89ce\") " pod="calico-system/whisker-65f46fbbcc-fd4lz" Oct 28 05:18:45.484163 kubelet[2761]: I1028 05:18:45.484057 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm2tq\" (UniqueName: \"kubernetes.io/projected/58f11e77-8c5d-47b6-8552-cf2d1e5f275c-kube-api-access-bm2tq\") pod \"coredns-66bc5c9577-8fdqb\" (UID: \"58f11e77-8c5d-47b6-8552-cf2d1e5f275c\") " pod="kube-system/coredns-66bc5c9577-8fdqb" Oct 28 05:18:45.484163 kubelet[2761]: I1028 05:18:45.484084 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw2n9\" (UniqueName: \"kubernetes.io/projected/3e214f89-6d8a-48a0-9125-ff9ee356160a-kube-api-access-dw2n9\") pod \"calico-kube-controllers-c4bfd986d-xlpx8\" (UID: \"3e214f89-6d8a-48a0-9125-ff9ee356160a\") " pod="calico-system/calico-kube-controllers-c4bfd986d-xlpx8" Oct 28 05:18:45.484312 kubelet[2761]: I1028 05:18:45.484130 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a9c518f-3d77-4deb-b369-d23040ed89ce-whisker-ca-bundle\") pod \"whisker-65f46fbbcc-fd4lz\" (UID: \"7a9c518f-3d77-4deb-b369-d23040ed89ce\") " pod="calico-system/whisker-65f46fbbcc-fd4lz" Oct 28 05:18:45.484312 kubelet[2761]: I1028 05:18:45.484159 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/58f11e77-8c5d-47b6-8552-cf2d1e5f275c-config-volume\") pod \"coredns-66bc5c9577-8fdqb\" (UID: \"58f11e77-8c5d-47b6-8552-cf2d1e5f275c\") " pod="kube-system/coredns-66bc5c9577-8fdqb" Oct 28 05:18:45.506862 systemd[1]: Created slice kubepods-besteffort-pod2dffcfa9_84ba_4077_b168_bc5f5cb035a9.slice - libcontainer container kubepods-besteffort-pod2dffcfa9_84ba_4077_b168_bc5f5cb035a9.slice. Oct 28 05:18:45.530339 systemd[1]: Created slice kubepods-besteffort-pod1c4104d9_7ba6_4171_8c43_b6c170ee1774.slice - libcontainer container kubepods-besteffort-pod1c4104d9_7ba6_4171_8c43_b6c170ee1774.slice. Oct 28 05:18:45.546408 systemd[1]: Created slice kubepods-besteffort-pod5a48253c_a0f4_4a3b_ba79_be0ea189e322.slice - libcontainer container kubepods-besteffort-pod5a48253c_a0f4_4a3b_ba79_be0ea189e322.slice. Oct 28 05:18:45.585391 kubelet[2761]: I1028 05:18:45.585322 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4104d9-7ba6-4171-8c43-b6c170ee1774-config\") pod \"goldmane-7c778bb748-2jltp\" (UID: \"1c4104d9-7ba6-4171-8c43-b6c170ee1774\") " pod="calico-system/goldmane-7c778bb748-2jltp" Oct 28 05:18:45.585391 kubelet[2761]: I1028 05:18:45.585382 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c4104d9-7ba6-4171-8c43-b6c170ee1774-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-2jltp\" (UID: \"1c4104d9-7ba6-4171-8c43-b6c170ee1774\") " pod="calico-system/goldmane-7c778bb748-2jltp" Oct 28 05:18:45.585653 kubelet[2761]: I1028 05:18:45.585441 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkpfg\" (UniqueName: \"kubernetes.io/projected/1c4104d9-7ba6-4171-8c43-b6c170ee1774-kube-api-access-wkpfg\") pod \"goldmane-7c778bb748-2jltp\" (UID: \"1c4104d9-7ba6-4171-8c43-b6c170ee1774\") " pod="calico-system/goldmane-7c778bb748-2jltp" Oct 28 05:18:45.585653 kubelet[2761]: I1028 05:18:45.585533 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68sgr\" (UniqueName: \"kubernetes.io/projected/5a48253c-a0f4-4a3b-ba79-be0ea189e322-kube-api-access-68sgr\") pod \"calico-apiserver-85d779dd4f-td9lt\" (UID: \"5a48253c-a0f4-4a3b-ba79-be0ea189e322\") " pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" Oct 28 05:18:45.585653 kubelet[2761]: I1028 05:18:45.585617 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2dffcfa9-84ba-4077-b168-bc5f5cb035a9-calico-apiserver-certs\") pod \"calico-apiserver-85d779dd4f-74h24\" (UID: \"2dffcfa9-84ba-4077-b168-bc5f5cb035a9\") " pod="calico-apiserver/calico-apiserver-85d779dd4f-74h24" Oct 28 05:18:45.586693 kubelet[2761]: I1028 05:18:45.586556 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1c4104d9-7ba6-4171-8c43-b6c170ee1774-goldmane-key-pair\") pod \"goldmane-7c778bb748-2jltp\" (UID: \"1c4104d9-7ba6-4171-8c43-b6c170ee1774\") " pod="calico-system/goldmane-7c778bb748-2jltp" Oct 28 05:18:45.587934 kubelet[2761]: I1028 05:18:45.586738 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/5a48253c-a0f4-4a3b-ba79-be0ea189e322-calico-apiserver-certs\") pod \"calico-apiserver-85d779dd4f-td9lt\" (UID: \"5a48253c-a0f4-4a3b-ba79-be0ea189e322\") " pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" Oct 28 05:18:45.587934 kubelet[2761]: I1028 05:18:45.586779 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmgpr\" (UniqueName: \"kubernetes.io/projected/2dffcfa9-84ba-4077-b168-bc5f5cb035a9-kube-api-access-tmgpr\") pod \"calico-apiserver-85d779dd4f-74h24\" (UID: \"2dffcfa9-84ba-4077-b168-bc5f5cb035a9\") " pod="calico-apiserver/calico-apiserver-85d779dd4f-74h24" Oct 28 05:18:45.732964 kubelet[2761]: E1028 05:18:45.731757 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:45.734239 containerd[1596]: time="2025-10-28T05:18:45.734168584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z42gp,Uid:10a83b40-012d-4ade-8ade-595315f777ca,Namespace:kube-system,Attempt:0,}" Oct 28 05:18:45.757619 containerd[1596]: time="2025-10-28T05:18:45.757547581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f46fbbcc-fd4lz,Uid:7a9c518f-3d77-4deb-b369-d23040ed89ce,Namespace:calico-system,Attempt:0,}" Oct 28 05:18:45.813447 kubelet[2761]: E1028 05:18:45.813184 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:45.815779 containerd[1596]: time="2025-10-28T05:18:45.814947141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8fdqb,Uid:58f11e77-8c5d-47b6-8552-cf2d1e5f275c,Namespace:kube-system,Attempt:0,}" Oct 28 05:18:45.820104 containerd[1596]: time="2025-10-28T05:18:45.820044765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4bfd986d-xlpx8,Uid:3e214f89-6d8a-48a0-9125-ff9ee356160a,Namespace:calico-system,Attempt:0,}" Oct 28 05:18:45.845870 containerd[1596]: time="2025-10-28T05:18:45.845756839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2jltp,Uid:1c4104d9-7ba6-4171-8c43-b6c170ee1774,Namespace:calico-system,Attempt:0,}" Oct 28 05:18:45.870377 containerd[1596]: time="2025-10-28T05:18:45.870016061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d779dd4f-74h24,Uid:2dffcfa9-84ba-4077-b168-bc5f5cb035a9,Namespace:calico-apiserver,Attempt:0,}" Oct 28 05:18:45.878207 containerd[1596]: time="2025-10-28T05:18:45.878149168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d779dd4f-td9lt,Uid:5a48253c-a0f4-4a3b-ba79-be0ea189e322,Namespace:calico-apiserver,Attempt:0,}" Oct 28 05:18:46.077952 systemd[1]: Created slice kubepods-besteffort-pod65eac80b_7114_46da_934f_c797b4afa603.slice - libcontainer container kubepods-besteffort-pod65eac80b_7114_46da_934f_c797b4afa603.slice. 
Oct 28 05:18:46.087407 containerd[1596]: time="2025-10-28T05:18:46.087346560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pj4dv,Uid:65eac80b-7114-46da-934f-c797b4afa603,Namespace:calico-system,Attempt:0,}" Oct 28 05:18:46.252563 containerd[1596]: time="2025-10-28T05:18:46.252326519Z" level=error msg="Failed to destroy network for sandbox \"299887d5a864d5f46be0d8e1ce6a7ab1c438b27bab3e4029c02d4dedbc27d0c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.252563 containerd[1596]: time="2025-10-28T05:18:46.252367377Z" level=error msg="Failed to destroy network for sandbox \"f4688a1a8d85c66ae947e0c00df52da20325aed56d6675724acc15391906f54a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.253996 containerd[1596]: time="2025-10-28T05:18:46.253919997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f46fbbcc-fd4lz,Uid:7a9c518f-3d77-4deb-b369-d23040ed89ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4688a1a8d85c66ae947e0c00df52da20325aed56d6675724acc15391906f54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.254808 kubelet[2761]: E1028 05:18:46.254732 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4688a1a8d85c66ae947e0c00df52da20325aed56d6675724acc15391906f54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.254980 kubelet[2761]: E1028 05:18:46.254850 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4688a1a8d85c66ae947e0c00df52da20325aed56d6675724acc15391906f54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65f46fbbcc-fd4lz" Oct 28 05:18:46.256970 kubelet[2761]: E1028 05:18:46.254884 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4688a1a8d85c66ae947e0c00df52da20325aed56d6675724acc15391906f54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65f46fbbcc-fd4lz" Oct 28 05:18:46.257808 kubelet[2761]: E1028 05:18:46.257159 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65f46fbbcc-fd4lz_calico-system(7a9c518f-3d77-4deb-b369-d23040ed89ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65f46fbbcc-fd4lz_calico-system(7a9c518f-3d77-4deb-b369-d23040ed89ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f4688a1a8d85c66ae947e0c00df52da20325aed56d6675724acc15391906f54a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65f46fbbcc-fd4lz" podUID="7a9c518f-3d77-4deb-b369-d23040ed89ce" Oct 28 05:18:46.261560 containerd[1596]: time="2025-10-28T05:18:46.261478569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z42gp,Uid:10a83b40-012d-4ade-8ade-595315f777ca,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"299887d5a864d5f46be0d8e1ce6a7ab1c438b27bab3e4029c02d4dedbc27d0c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.290918 kubelet[2761]: E1028 05:18:46.290342 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299887d5a864d5f46be0d8e1ce6a7ab1c438b27bab3e4029c02d4dedbc27d0c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.290918 kubelet[2761]: E1028 05:18:46.290433 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299887d5a864d5f46be0d8e1ce6a7ab1c438b27bab3e4029c02d4dedbc27d0c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-z42gp" Oct 28 05:18:46.290918 kubelet[2761]: E1028 05:18:46.290460 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299887d5a864d5f46be0d8e1ce6a7ab1c438b27bab3e4029c02d4dedbc27d0c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-z42gp" Oct 28 05:18:46.291568 kubelet[2761]: E1028 05:18:46.290536 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-z42gp_kube-system(10a83b40-012d-4ade-8ade-595315f777ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-z42gp_kube-system(10a83b40-012d-4ade-8ade-595315f777ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"299887d5a864d5f46be0d8e1ce6a7ab1c438b27bab3e4029c02d4dedbc27d0c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-z42gp" podUID="10a83b40-012d-4ade-8ade-595315f777ca" Oct 28 05:18:46.305534 containerd[1596]: time="2025-10-28T05:18:46.305465426Z" level=error msg="Failed to destroy network for sandbox \"bd53cb77b4310506e895a28a5de79421cded3baef73f23966246a4cca5152c58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 
05:18:46.311551 containerd[1596]: time="2025-10-28T05:18:46.311415370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8fdqb,Uid:58f11e77-8c5d-47b6-8552-cf2d1e5f275c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd53cb77b4310506e895a28a5de79421cded3baef73f23966246a4cca5152c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.312523 kubelet[2761]: E1028 05:18:46.311764 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd53cb77b4310506e895a28a5de79421cded3baef73f23966246a4cca5152c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.312523 kubelet[2761]: E1028 05:18:46.311840 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd53cb77b4310506e895a28a5de79421cded3baef73f23966246a4cca5152c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8fdqb" Oct 28 05:18:46.312523 kubelet[2761]: E1028 05:18:46.311867 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd53cb77b4310506e895a28a5de79421cded3baef73f23966246a4cca5152c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8fdqb" Oct 28 05:18:46.312720 kubelet[2761]: E1028 05:18:46.311937 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8fdqb_kube-system(58f11e77-8c5d-47b6-8552-cf2d1e5f275c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8fdqb_kube-system(58f11e77-8c5d-47b6-8552-cf2d1e5f275c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd53cb77b4310506e895a28a5de79421cded3baef73f23966246a4cca5152c58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8fdqb" podUID="58f11e77-8c5d-47b6-8552-cf2d1e5f275c" Oct 28 05:18:46.323470 kubelet[2761]: E1028 05:18:46.323425 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:46.330685 containerd[1596]: time="2025-10-28T05:18:46.330511631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 28 05:18:46.382026 containerd[1596]: time="2025-10-28T05:18:46.381966741Z" level=error msg="Failed to destroy network for sandbox \"18d666ed354ba80a2b4c5825becfd13a3cbde2c1fac1947b301c71342162e1b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Oct 28 05:18:46.383089 containerd[1596]: time="2025-10-28T05:18:46.383042302Z" level=error msg="Failed to destroy network for sandbox \"711b13865ebba241670e5fc7ae9b9cb800fb0a6bf26aed58530bf6f99cfac974\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.383942 containerd[1596]: time="2025-10-28T05:18:46.383869849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d779dd4f-74h24,Uid:2dffcfa9-84ba-4077-b168-bc5f5cb035a9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"711b13865ebba241670e5fc7ae9b9cb800fb0a6bf26aed58530bf6f99cfac974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.384613 containerd[1596]: time="2025-10-28T05:18:46.384289887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4bfd986d-xlpx8,Uid:3e214f89-6d8a-48a0-9125-ff9ee356160a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d666ed354ba80a2b4c5825becfd13a3cbde2c1fac1947b301c71342162e1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.386009 kubelet[2761]: E1028 05:18:46.384929 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d666ed354ba80a2b4c5825becfd13a3cbde2c1fac1947b301c71342162e1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.386009 kubelet[2761]: E1028 05:18:46.385012 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d666ed354ba80a2b4c5825becfd13a3cbde2c1fac1947b301c71342162e1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4bfd986d-xlpx8" Oct 28 05:18:46.386009 kubelet[2761]: E1028 05:18:46.385040 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d666ed354ba80a2b4c5825becfd13a3cbde2c1fac1947b301c71342162e1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4bfd986d-xlpx8" Oct 28 05:18:46.386182 kubelet[2761]: E1028 05:18:46.385111 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c4bfd986d-xlpx8_calico-system(3e214f89-6d8a-48a0-9125-ff9ee356160a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c4bfd986d-xlpx8_calico-system(3e214f89-6d8a-48a0-9125-ff9ee356160a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"18d666ed354ba80a2b4c5825becfd13a3cbde2c1fac1947b301c71342162e1b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4bfd986d-xlpx8" podUID="3e214f89-6d8a-48a0-9125-ff9ee356160a" Oct 28 05:18:46.386182 kubelet[2761]: E1028 05:18:46.384571 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711b13865ebba241670e5fc7ae9b9cb800fb0a6bf26aed58530bf6f99cfac974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.386182 kubelet[2761]: E1028 05:18:46.385787 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711b13865ebba241670e5fc7ae9b9cb800fb0a6bf26aed58530bf6f99cfac974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d779dd4f-74h24" Oct 28 05:18:46.386354 kubelet[2761]: E1028 05:18:46.385808 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711b13865ebba241670e5fc7ae9b9cb800fb0a6bf26aed58530bf6f99cfac974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d779dd4f-74h24" Oct 28 05:18:46.386354 kubelet[2761]: E1028 05:18:46.385879 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85d779dd4f-74h24_calico-apiserver(2dffcfa9-84ba-4077-b168-bc5f5cb035a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85d779dd4f-74h24_calico-apiserver(2dffcfa9-84ba-4077-b168-bc5f5cb035a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"711b13865ebba241670e5fc7ae9b9cb800fb0a6bf26aed58530bf6f99cfac974\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-74h24" podUID="2dffcfa9-84ba-4077-b168-bc5f5cb035a9" Oct 28 05:18:46.388259 containerd[1596]: time="2025-10-28T05:18:46.388189528Z" level=error msg="Failed to destroy network for sandbox \"0020528a9b59b3f1524896eb2ba65e8daf77f28af4126fe1bffd43b7f64e545e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.389302 containerd[1596]: time="2025-10-28T05:18:46.389194465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d779dd4f-td9lt,Uid:5a48253c-a0f4-4a3b-ba79-be0ea189e322,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0020528a9b59b3f1524896eb2ba65e8daf77f28af4126fe1bffd43b7f64e545e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.389884 kubelet[2761]: E1028 05:18:46.389833 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0020528a9b59b3f1524896eb2ba65e8daf77f28af4126fe1bffd43b7f64e545e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.390013 kubelet[2761]: E1028 05:18:46.389912 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0020528a9b59b3f1524896eb2ba65e8daf77f28af4126fe1bffd43b7f64e545e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" Oct 28 05:18:46.390013 kubelet[2761]: E1028 05:18:46.389940 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0020528a9b59b3f1524896eb2ba65e8daf77f28af4126fe1bffd43b7f64e545e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" Oct 28 05:18:46.391574 kubelet[2761]: E1028 05:18:46.391515 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85d779dd4f-td9lt_calico-apiserver(5a48253c-a0f4-4a3b-ba79-be0ea189e322)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85d779dd4f-td9lt_calico-apiserver(5a48253c-a0f4-4a3b-ba79-be0ea189e322)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0020528a9b59b3f1524896eb2ba65e8daf77f28af4126fe1bffd43b7f64e545e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" podUID="5a48253c-a0f4-4a3b-ba79-be0ea189e322" Oct 28 05:18:46.414572 containerd[1596]: time="2025-10-28T05:18:46.414437024Z" level=error msg="Failed to destroy network for sandbox \"1fb2425584f67c1feeb6d68607b8bffae050b4bd721c4e67874fe699b51a94ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.415784 containerd[1596]: time="2025-10-28T05:18:46.415657725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2jltp,Uid:1c4104d9-7ba6-4171-8c43-b6c170ee1774,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb2425584f67c1feeb6d68607b8bffae050b4bd721c4e67874fe699b51a94ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.416471 kubelet[2761]: E1028 05:18:46.416065 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1fb2425584f67c1feeb6d68607b8bffae050b4bd721c4e67874fe699b51a94ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.416471 kubelet[2761]: E1028 05:18:46.416124 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb2425584f67c1feeb6d68607b8bffae050b4bd721c4e67874fe699b51a94ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-2jltp" Oct 28 05:18:46.416471 kubelet[2761]: E1028 05:18:46.416144 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb2425584f67c1feeb6d68607b8bffae050b4bd721c4e67874fe699b51a94ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-2jltp" Oct 28 05:18:46.416785 kubelet[2761]: E1028 05:18:46.416197 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-2jltp_calico-system(1c4104d9-7ba6-4171-8c43-b6c170ee1774)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-2jltp_calico-system(1c4104d9-7ba6-4171-8c43-b6c170ee1774)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fb2425584f67c1feeb6d68607b8bffae050b4bd721c4e67874fe699b51a94ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-2jltp" podUID="1c4104d9-7ba6-4171-8c43-b6c170ee1774" Oct 28 05:18:46.451743 containerd[1596]: time="2025-10-28T05:18:46.451634193Z" level=error msg="Failed to destroy network for sandbox \"22bb0012443b6bcd0a20f3477449bd19ac19404dd9668783f90ff40d535d9961\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.452919 containerd[1596]: time="2025-10-28T05:18:46.452858807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pj4dv,Uid:65eac80b-7114-46da-934f-c797b4afa603,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22bb0012443b6bcd0a20f3477449bd19ac19404dd9668783f90ff40d535d9961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.454398 kubelet[2761]: E1028 05:18:46.453303 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22bb0012443b6bcd0a20f3477449bd19ac19404dd9668783f90ff40d535d9961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:18:46.454398 kubelet[2761]: E1028 05:18:46.453376 2761 kuberuntime_sandbox.go:71] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22bb0012443b6bcd0a20f3477449bd19ac19404dd9668783f90ff40d535d9961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pj4dv" Oct 28 05:18:46.454398 kubelet[2761]: E1028 05:18:46.453396 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22bb0012443b6bcd0a20f3477449bd19ac19404dd9668783f90ff40d535d9961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pj4dv" Oct 28 05:18:46.454556 kubelet[2761]: E1028 05:18:46.453456 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pj4dv_calico-system(65eac80b-7114-46da-934f-c797b4afa603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pj4dv_calico-system(65eac80b-7114-46da-934f-c797b4afa603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22bb0012443b6bcd0a20f3477449bd19ac19404dd9668783f90ff40d535d9961\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pj4dv" podUID="65eac80b-7114-46da-934f-c797b4afa603" Oct 28 05:18:52.519973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164513788.mount: Deactivated successfully. 
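Every RunPodSandbox failure above reports the same root cause: the calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container publishes only once it is running with /var/lib/calico/ mounted (calico-node starts at 05:18:53 below, after which sandbox creation begins to succeed). A minimal Go sketch of that check, for illustration only and not Calico's actual source:

    // Sketch of the condition behind the repeated
    // "stat /var/lib/calico/nodename: no such file or directory" errors:
    // the CNI plugin needs the nodename file that calico/node writes.
    package main

    import (
        "fmt"
        "os"
    )

    const nodenameFile = "/var/lib/calico/nodename" // hostPath populated by calico/node

    func main() {
        name, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            // This is the state every sandbox-creation error above observed.
            fmt.Fprintf(os.Stderr, "stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
            os.Exit(1)
        }
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("Calico node name: %s\n", name)
    }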
Oct 28 05:18:52.617663 containerd[1596]: time="2025-10-28T05:18:52.617492620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 28 05:18:52.632833 containerd[1596]: time="2025-10-28T05:18:52.632758571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.300498992s" Oct 28 05:18:52.633542 containerd[1596]: time="2025-10-28T05:18:52.633068843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 28 05:18:52.667911 containerd[1596]: time="2025-10-28T05:18:52.667490280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:52.679480 containerd[1596]: time="2025-10-28T05:18:52.679347236Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:52.680004 containerd[1596]: time="2025-10-28T05:18:52.679966310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:18:52.691901 containerd[1596]: time="2025-10-28T05:18:52.691757843Z" level=info msg="CreateContainer within sandbox \"c6a77203bf1fa8d4c49c75e864b78f704c15f34a297c17ce31d65b4f4e7514f4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 28 05:18:52.800405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691329470.mount: Deactivated successfully. Oct 28 05:18:52.800759 containerd[1596]: time="2025-10-28T05:18:52.800717789Z" level=info msg="Container 4db498e89a2197db3158e2b4616e41b8baf8f99041d1c4c335b4618cddff042f: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:18:52.893657 containerd[1596]: time="2025-10-28T05:18:52.893575753Z" level=info msg="CreateContainer within sandbox \"c6a77203bf1fa8d4c49c75e864b78f704c15f34a297c17ce31d65b4f4e7514f4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4db498e89a2197db3158e2b4616e41b8baf8f99041d1c4c335b4618cddff042f\"" Oct 28 05:18:52.894614 containerd[1596]: time="2025-10-28T05:18:52.894450007Z" level=info msg="StartContainer for \"4db498e89a2197db3158e2b4616e41b8baf8f99041d1c4c335b4618cddff042f\"" Oct 28 05:18:52.900981 containerd[1596]: time="2025-10-28T05:18:52.900934788Z" level=info msg="connecting to shim 4db498e89a2197db3158e2b4616e41b8baf8f99041d1c4c335b4618cddff042f" address="unix:///run/containerd/s/2a87c113b2acd5a51d346f38c17a8e4084d8479e5ad7eeacc8a1e8555dbb38d6" protocol=ttrpc version=3 Oct 28 05:18:53.080206 systemd[1]: Started cri-containerd-4db498e89a2197db3158e2b4616e41b8baf8f99041d1c4c335b4618cddff042f.scope - libcontainer container 4db498e89a2197db3158e2b4616e41b8baf8f99041d1c4c335b4618cddff042f. Oct 28 05:18:53.181685 containerd[1596]: time="2025-10-28T05:18:53.180885329Z" level=info msg="StartContainer for \"4db498e89a2197db3158e2b4616e41b8baf8f99041d1c4c335b4618cddff042f\" returns successfully" Oct 28 05:18:53.287820 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Oct 28 05:18:53.289883 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 28 05:18:53.358270 kubelet[2761]: E1028 05:18:53.358024 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:53.387928 kubelet[2761]: I1028 05:18:53.387848 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-95tdc" podStartSLOduration=1.312854269 podStartE2EDuration="17.387826972s" podCreationTimestamp="2025-10-28 05:18:36 +0000 UTC" firstStartedPulling="2025-10-28 05:18:36.559138256 +0000 UTC m=+27.688139119" lastFinishedPulling="2025-10-28 05:18:52.634110959 +0000 UTC m=+43.763111822" observedRunningTime="2025-10-28 05:18:53.387544095 +0000 UTC m=+44.516544989" watchObservedRunningTime="2025-10-28 05:18:53.387826972 +0000 UTC m=+44.516827858" Oct 28 05:18:53.649210 kubelet[2761]: I1028 05:18:53.649064 2761 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n97g7\" (UniqueName: \"kubernetes.io/projected/7a9c518f-3d77-4deb-b369-d23040ed89ce-kube-api-access-n97g7\") pod \"7a9c518f-3d77-4deb-b369-d23040ed89ce\" (UID: \"7a9c518f-3d77-4deb-b369-d23040ed89ce\") " Oct 28 05:18:53.649210 kubelet[2761]: I1028 05:18:53.649125 2761 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a9c518f-3d77-4deb-b369-d23040ed89ce-whisker-backend-key-pair\") pod \"7a9c518f-3d77-4deb-b369-d23040ed89ce\" (UID: \"7a9c518f-3d77-4deb-b369-d23040ed89ce\") " Oct 28 05:18:53.649210 kubelet[2761]: I1028 05:18:53.649144 2761 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a9c518f-3d77-4deb-b369-d23040ed89ce-whisker-ca-bundle\") pod \"7a9c518f-3d77-4deb-b369-d23040ed89ce\" (UID: \"7a9c518f-3d77-4deb-b369-d23040ed89ce\") " Oct 28 05:18:53.659833 systemd[1]: var-lib-kubelet-pods-7a9c518f\x2d3d77\x2d4deb\x2db369\x2dd23040ed89ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn97g7.mount: Deactivated successfully. Oct 28 05:18:53.660253 kubelet[2761]: I1028 05:18:53.659920 2761 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9c518f-3d77-4deb-b369-d23040ed89ce-kube-api-access-n97g7" (OuterVolumeSpecName: "kube-api-access-n97g7") pod "7a9c518f-3d77-4deb-b369-d23040ed89ce" (UID: "7a9c518f-3d77-4deb-b369-d23040ed89ce"). InnerVolumeSpecName "kube-api-access-n97g7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 28 05:18:53.667030 kubelet[2761]: I1028 05:18:53.666973 2761 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9c518f-3d77-4deb-b369-d23040ed89ce-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7a9c518f-3d77-4deb-b369-d23040ed89ce" (UID: "7a9c518f-3d77-4deb-b369-d23040ed89ce"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 28 05:18:53.669178 kubelet[2761]: I1028 05:18:53.669078 2761 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9c518f-3d77-4deb-b369-d23040ed89ce-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7a9c518f-3d77-4deb-b369-d23040ed89ce" (UID: "7a9c518f-3d77-4deb-b369-d23040ed89ce"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 28 05:18:53.670920 systemd[1]: var-lib-kubelet-pods-7a9c518f\x2d3d77\x2d4deb\x2db369\x2dd23040ed89ce-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 28 05:18:53.750169 kubelet[2761]: I1028 05:18:53.750121 2761 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n97g7\" (UniqueName: \"kubernetes.io/projected/7a9c518f-3d77-4deb-b369-d23040ed89ce-kube-api-access-n97g7\") on node \"ci-4501.0.0-n-a8513f8a3e\" DevicePath \"\"" Oct 28 05:18:53.750169 kubelet[2761]: I1028 05:18:53.750161 2761 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a9c518f-3d77-4deb-b369-d23040ed89ce-whisker-backend-key-pair\") on node \"ci-4501.0.0-n-a8513f8a3e\" DevicePath \"\"" Oct 28 05:18:53.750169 kubelet[2761]: I1028 05:18:53.750173 2761 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a9c518f-3d77-4deb-b369-d23040ed89ce-whisker-ca-bundle\") on node \"ci-4501.0.0-n-a8513f8a3e\" DevicePath \"\"" Oct 28 05:18:54.363713 kubelet[2761]: I1028 05:18:54.361034 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 05:18:54.363713 kubelet[2761]: E1028 05:18:54.362544 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:54.372323 systemd[1]: Removed slice kubepods-besteffort-pod7a9c518f_3d77_4deb_b369_d23040ed89ce.slice - libcontainer container kubepods-besteffort-pod7a9c518f_3d77_4deb_b369_d23040ed89ce.slice. Oct 28 05:18:54.484931 systemd[1]: Created slice kubepods-besteffort-poda4214a7a_e8c1_4291_b44c_d7574f9e0241.slice - libcontainer container kubepods-besteffort-poda4214a7a_e8c1_4291_b44c_d7574f9e0241.slice. 
Oct 28 05:18:54.556565 kubelet[2761]: I1028 05:18:54.556406 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kfhb\" (UniqueName: \"kubernetes.io/projected/a4214a7a-e8c1-4291-b44c-d7574f9e0241-kube-api-access-2kfhb\") pod \"whisker-64c845d8b5-xc7c2\" (UID: \"a4214a7a-e8c1-4291-b44c-d7574f9e0241\") " pod="calico-system/whisker-64c845d8b5-xc7c2" Oct 28 05:18:54.556967 kubelet[2761]: I1028 05:18:54.556928 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4214a7a-e8c1-4291-b44c-d7574f9e0241-whisker-ca-bundle\") pod \"whisker-64c845d8b5-xc7c2\" (UID: \"a4214a7a-e8c1-4291-b44c-d7574f9e0241\") " pod="calico-system/whisker-64c845d8b5-xc7c2" Oct 28 05:18:54.557155 kubelet[2761]: I1028 05:18:54.557130 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4214a7a-e8c1-4291-b44c-d7574f9e0241-whisker-backend-key-pair\") pod \"whisker-64c845d8b5-xc7c2\" (UID: \"a4214a7a-e8c1-4291-b44c-d7574f9e0241\") " pod="calico-system/whisker-64c845d8b5-xc7c2" Oct 28 05:18:54.793572 containerd[1596]: time="2025-10-28T05:18:54.793497466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64c845d8b5-xc7c2,Uid:a4214a7a-e8c1-4291-b44c-d7574f9e0241,Namespace:calico-system,Attempt:0,}" Oct 28 05:18:55.070184 kubelet[2761]: I1028 05:18:55.069896 2761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9c518f-3d77-4deb-b369-d23040ed89ce" path="/var/lib/kubelet/pods/7a9c518f-3d77-4deb-b369-d23040ed89ce/volumes" Oct 28 05:18:55.135279 systemd-networkd[1491]: cali1c66e841879: Link UP Oct 28 05:18:55.137392 systemd-networkd[1491]: cali1c66e841879: Gained carrier Oct 28 05:18:55.163872 containerd[1596]: 2025-10-28 05:18:54.827 [INFO][3926] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:18:55.163872 containerd[1596]: 2025-10-28 05:18:54.867 [INFO][3926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0 whisker-64c845d8b5- calico-system a4214a7a-e8c1-4291-b44c-d7574f9e0241 926 0 2025-10-28 05:18:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64c845d8b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4501.0.0-n-a8513f8a3e whisker-64c845d8b5-xc7c2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1c66e841879 [] [] }} ContainerID="389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" Namespace="calico-system" Pod="whisker-64c845d8b5-xc7c2" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-" Oct 28 05:18:55.163872 containerd[1596]: 2025-10-28 05:18:54.868 [INFO][3926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" Namespace="calico-system" Pod="whisker-64c845d8b5-xc7c2" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0" Oct 28 05:18:55.163872 containerd[1596]: 2025-10-28 05:18:55.042 [INFO][3967] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" 
HandleID="k8s-pod-network.389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0" Oct 28 05:18:55.166118 containerd[1596]: 2025-10-28 05:18:55.044 [INFO][3967] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" HandleID="k8s-pod-network.389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003381b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4501.0.0-n-a8513f8a3e", "pod":"whisker-64c845d8b5-xc7c2", "timestamp":"2025-10-28 05:18:55.042704468 +0000 UTC"}, Hostname:"ci-4501.0.0-n-a8513f8a3e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:18:55.166118 containerd[1596]: 2025-10-28 05:18:55.044 [INFO][3967] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:18:55.166118 containerd[1596]: 2025-10-28 05:18:55.045 [INFO][3967] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 05:18:55.166118 containerd[1596]: 2025-10-28 05:18:55.045 [INFO][3967] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4501.0.0-n-a8513f8a3e' Oct 28 05:18:55.166118 containerd[1596]: 2025-10-28 05:18:55.063 [INFO][3967] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:55.166118 containerd[1596]: 2025-10-28 05:18:55.078 [INFO][3967] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:55.166118 containerd[1596]: 2025-10-28 05:18:55.088 [INFO][3967] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:55.166118 containerd[1596]: 2025-10-28 05:18:55.090 [INFO][3967] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:55.166118 containerd[1596]: 2025-10-28 05:18:55.094 [INFO][3967] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:55.166437 containerd[1596]: 2025-10-28 05:18:55.094 [INFO][3967] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:55.166437 containerd[1596]: 2025-10-28 05:18:55.096 [INFO][3967] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8 Oct 28 05:18:55.166437 containerd[1596]: 2025-10-28 05:18:55.101 [INFO][3967] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:55.166437 containerd[1596]: 2025-10-28 05:18:55.106 [INFO][3967] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.193/26] block=192.168.103.192/26 handle="k8s-pod-network.389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:55.166437 containerd[1596]: 2025-10-28 
05:18:55.106 [INFO][3967] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.193/26] handle="k8s-pod-network.389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:55.166437 containerd[1596]: 2025-10-28 05:18:55.106 [INFO][3967] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 05:18:55.166437 containerd[1596]: 2025-10-28 05:18:55.106 [INFO][3967] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.193/26] IPv6=[] ContainerID="389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" HandleID="k8s-pod-network.389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0" Oct 28 05:18:55.166623 containerd[1596]: 2025-10-28 05:18:55.110 [INFO][3926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" Namespace="calico-system" Pod="whisker-64c845d8b5-xc7c2" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0", GenerateName:"whisker-64c845d8b5-", Namespace:"calico-system", SelfLink:"", UID:"a4214a7a-e8c1-4291-b44c-d7574f9e0241", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64c845d8b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"", Pod:"whisker-64c845d8b5-xc7c2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1c66e841879", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:18:55.166623 containerd[1596]: 2025-10-28 05:18:55.110 [INFO][3926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.193/32] ContainerID="389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" Namespace="calico-system" Pod="whisker-64c845d8b5-xc7c2" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0" Oct 28 05:18:55.169350 containerd[1596]: 2025-10-28 05:18:55.110 [INFO][3926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c66e841879 ContainerID="389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" Namespace="calico-system" Pod="whisker-64c845d8b5-xc7c2" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0" Oct 28 05:18:55.169350 containerd[1596]: 2025-10-28 05:18:55.128 [INFO][3926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" Namespace="calico-system" Pod="whisker-64c845d8b5-xc7c2" 
WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0" Oct 28 05:18:55.171847 containerd[1596]: 2025-10-28 05:18:55.129 [INFO][3926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" Namespace="calico-system" Pod="whisker-64c845d8b5-xc7c2" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0", GenerateName:"whisker-64c845d8b5-", Namespace:"calico-system", SelfLink:"", UID:"a4214a7a-e8c1-4291-b44c-d7574f9e0241", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64c845d8b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8", Pod:"whisker-64c845d8b5-xc7c2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1c66e841879", MAC:"ee:ab:f8:5c:58:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:18:55.171976 containerd[1596]: 2025-10-28 05:18:55.152 [INFO][3926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" Namespace="calico-system" Pod="whisker-64c845d8b5-xc7c2" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-whisker--64c845d8b5--xc7c2-eth0" Oct 28 05:18:55.479083 containerd[1596]: time="2025-10-28T05:18:55.478943950Z" level=info msg="connecting to shim 389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8" address="unix:///run/containerd/s/9708b4364450ed78dbca8fc312e90a227a0b4bcda606f8cbdf2f5726235a6fc6" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:18:55.527014 systemd[1]: Started cri-containerd-389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8.scope - libcontainer container 389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8. 
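The ipam/ entries above ([INFO][3967]) trace Calico's block-affinity assignment for the whisker sandbox: take the host-wide IPAM lock, confirm this host's affinity for the 192.168.103.192/26 block, hand out one free address from it (192.168.103.193), record it under the sandbox's handle, then release the lock. A toy, in-memory Go sketch of that flow; the types and the pre-reserved .192 address are illustrative assumptions, not Calico's implementation:

    // Toy block-affinity IPAM: one affine /26 block per host, addresses
    // handed out under a lock and recorded against a handle.
    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    type block struct {
        cidr      netip.Prefix          // the host's affine block, e.g. 192.168.103.192/26
        allocated map[netip.Addr]string // address -> handle that claimed it
    }

    var ipamLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

    func autoAssign(b *block, handle string) (netip.Addr, error) {
        ipamLock.Lock()
        defer ipamLock.Unlock()
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.allocated[a]; taken {
                continue
            }
            b.allocated[a] = handle // "Writing block in order to claim IPs"
            return a, nil
        }
        return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        b := &block{
            cidr:      netip.MustParsePrefix("192.168.103.192/26"),
            allocated: map[netip.Addr]string{},
        }
        // Pretend the first block address is already taken so the toy output
        // matches the log, which assigns 192.168.103.193 to the whisker pod.
        b.allocated[netip.MustParseAddr("192.168.103.192")] = "already-in-use"
        ip, err := autoAssign(b, "k8s-pod-network.<sandbox-id>")
        if err != nil {
            panic(err)
        }
        fmt.Println("assigned", ip) // assigned 192.168.103.193
    }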
Oct 28 05:18:55.590410 containerd[1596]: time="2025-10-28T05:18:55.590354173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64c845d8b5-xc7c2,Uid:a4214a7a-e8c1-4291-b44c-d7574f9e0241,Namespace:calico-system,Attempt:0,} returns sandbox id \"389b299efffd5b1b8f2bd55b1a13d2c930047a4da962e9773b7bcc6afcec71a8\"" Oct 28 05:18:55.603011 containerd[1596]: time="2025-10-28T05:18:55.602949549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 05:18:55.956993 containerd[1596]: time="2025-10-28T05:18:55.956828778Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:18:55.965455 containerd[1596]: time="2025-10-28T05:18:55.958940981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 05:18:55.965720 containerd[1596]: time="2025-10-28T05:18:55.960842294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 05:18:55.965850 kubelet[2761]: E1028 05:18:55.965810 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:18:55.966578 kubelet[2761]: E1028 05:18:55.965876 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:18:55.966578 kubelet[2761]: E1028 05:18:55.965991 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-64c845d8b5-xc7c2_calico-system(a4214a7a-e8c1-4291-b44c-d7574f9e0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 05:18:55.967413 containerd[1596]: time="2025-10-28T05:18:55.967344736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 05:18:56.229925 systemd-networkd[1491]: cali1c66e841879: Gained IPv6LL Oct 28 05:18:56.303046 containerd[1596]: time="2025-10-28T05:18:56.302951338Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:18:56.303968 containerd[1596]: time="2025-10-28T05:18:56.303864894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 05:18:56.303968 containerd[1596]: time="2025-10-28T05:18:56.303917061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 05:18:56.304475 
kubelet[2761]: E1028 05:18:56.304384 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:18:56.304759 kubelet[2761]: E1028 05:18:56.304613 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:18:56.305017 kubelet[2761]: E1028 05:18:56.304968 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-64c845d8b5-xc7c2_calico-system(a4214a7a-e8c1-4291-b44c-d7574f9e0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 05:18:56.305393 kubelet[2761]: E1028 05:18:56.305335 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64c845d8b5-xc7c2" podUID="a4214a7a-e8c1-4291-b44c-d7574f9e0241" Oct 28 05:18:56.373416 kubelet[2761]: E1028 05:18:56.372948 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64c845d8b5-xc7c2" podUID="a4214a7a-e8c1-4291-b44c-d7574f9e0241" Oct 28 05:18:56.583780 kubelet[2761]: I1028 05:18:56.583109 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 05:18:56.583780 kubelet[2761]: E1028 05:18:56.583579 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:56.771081 containerd[1596]: time="2025-10-28T05:18:56.771032038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4db498e89a2197db3158e2b4616e41b8baf8f99041d1c4c335b4618cddff042f\" id:\"80ccd6a15e9eb3ab70e4af126c6ce30dbfea073fb63848c87334edf701554d4a\" pid:4123 exit_status:1 exited_at:{seconds:1761628736 nanos:768295280}" Oct 28 05:18:56.881073 containerd[1596]: time="2025-10-28T05:18:56.880738087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4db498e89a2197db3158e2b4616e41b8baf8f99041d1c4c335b4618cddff042f\" id:\"220a1fb450be047a4da52b4025124973251531dc5836fd234bcfa03150d570be\" pid:4146 exit_status:1 exited_at:{seconds:1761628736 nanos:880305748}" Oct 28 05:18:57.374157 kubelet[2761]: E1028 05:18:57.374064 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64c845d8b5-xc7c2" podUID="a4214a7a-e8c1-4291-b44c-d7574f9e0241" Oct 28 05:18:58.065710 containerd[1596]: time="2025-10-28T05:18:58.065634129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2jltp,Uid:1c4104d9-7ba6-4171-8c43-b6c170ee1774,Namespace:calico-system,Attempt:0,}" Oct 28 05:18:58.066854 kubelet[2761]: E1028 05:18:58.066770 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:58.068285 containerd[1596]: time="2025-10-28T05:18:58.068222231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pj4dv,Uid:65eac80b-7114-46da-934f-c797b4afa603,Namespace:calico-system,Attempt:0,}" Oct 28 05:18:58.068672 containerd[1596]: time="2025-10-28T05:18:58.068567737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8fdqb,Uid:58f11e77-8c5d-47b6-8552-cf2d1e5f275c,Namespace:kube-system,Attempt:0,}" Oct 28 05:18:58.304422 systemd-networkd[1491]: cali21736d03539: Link UP Oct 28 05:18:58.305634 systemd-networkd[1491]: cali21736d03539: Gained carrier Oct 28 05:18:58.323732 containerd[1596]: 2025-10-28 05:18:58.136 [INFO][4198] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:18:58.323732 containerd[1596]: 2025-10-28 05:18:58.158 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0 coredns-66bc5c9577- kube-system 58f11e77-8c5d-47b6-8552-cf2d1e5f275c 853 0 2025-10-28 05:18:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4501.0.0-n-a8513f8a3e coredns-66bc5c9577-8fdqb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali21736d03539 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" Namespace="kube-system" Pod="coredns-66bc5c9577-8fdqb" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-" Oct 28 05:18:58.323732 containerd[1596]: 2025-10-28 05:18:58.159 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" Namespace="kube-system" Pod="coredns-66bc5c9577-8fdqb" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0" Oct 28 05:18:58.323732 containerd[1596]: 2025-10-28 05:18:58.224 [INFO][4214] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" HandleID="k8s-pod-network.0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0" Oct 28 05:18:58.324107 containerd[1596]: 2025-10-28 05:18:58.224 [INFO][4214] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" HandleID="k8s-pod-network.0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4501.0.0-n-a8513f8a3e", "pod":"coredns-66bc5c9577-8fdqb", "timestamp":"2025-10-28 05:18:58.224161499 +0000 UTC"}, Hostname:"ci-4501.0.0-n-a8513f8a3e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:18:58.324107 containerd[1596]: 2025-10-28 05:18:58.224 [INFO][4214] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:18:58.324107 containerd[1596]: 2025-10-28 05:18:58.224 [INFO][4214] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 05:18:58.324107 containerd[1596]: 2025-10-28 05:18:58.224 [INFO][4214] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4501.0.0-n-a8513f8a3e' Oct 28 05:18:58.324107 containerd[1596]: 2025-10-28 05:18:58.239 [INFO][4214] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.324107 containerd[1596]: 2025-10-28 05:18:58.251 [INFO][4214] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.324107 containerd[1596]: 2025-10-28 05:18:58.263 [INFO][4214] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.324107 containerd[1596]: 2025-10-28 05:18:58.269 [INFO][4214] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.324107 containerd[1596]: 2025-10-28 05:18:58.276 [INFO][4214] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.324602 containerd[1596]: 2025-10-28 05:18:58.276 [INFO][4214] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.324602 containerd[1596]: 2025-10-28 05:18:58.278 [INFO][4214] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70 Oct 28 05:18:58.324602 containerd[1596]: 2025-10-28 05:18:58.285 [INFO][4214] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.324602 containerd[1596]: 2025-10-28 05:18:58.293 [INFO][4214] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.194/26] block=192.168.103.192/26 handle="k8s-pod-network.0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.324602 containerd[1596]: 2025-10-28 05:18:58.293 [INFO][4214] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.194/26] handle="k8s-pod-network.0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.324602 containerd[1596]: 2025-10-28 05:18:58.293 [INFO][4214] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 05:18:58.324602 containerd[1596]: 2025-10-28 05:18:58.294 [INFO][4214] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.194/26] IPv6=[] ContainerID="0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" HandleID="k8s-pod-network.0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0" Oct 28 05:18:58.328179 containerd[1596]: 2025-10-28 05:18:58.300 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" Namespace="kube-system" Pod="coredns-66bc5c9577-8fdqb" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"58f11e77-8c5d-47b6-8552-cf2d1e5f275c", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"", Pod:"coredns-66bc5c9577-8fdqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21736d03539", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:18:58.328179 containerd[1596]: 2025-10-28 05:18:58.300 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.194/32] ContainerID="0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" Namespace="kube-system" Pod="coredns-66bc5c9577-8fdqb" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0" Oct 28 05:18:58.328179 containerd[1596]: 2025-10-28 05:18:58.300 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21736d03539 ContainerID="0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" Namespace="kube-system" Pod="coredns-66bc5c9577-8fdqb" 
WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0" Oct 28 05:18:58.328179 containerd[1596]: 2025-10-28 05:18:58.306 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" Namespace="kube-system" Pod="coredns-66bc5c9577-8fdqb" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0" Oct 28 05:18:58.328179 containerd[1596]: 2025-10-28 05:18:58.306 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" Namespace="kube-system" Pod="coredns-66bc5c9577-8fdqb" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"58f11e77-8c5d-47b6-8552-cf2d1e5f275c", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70", Pod:"coredns-66bc5c9577-8fdqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21736d03539", MAC:"2a:eb:86:12:b5:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:18:58.329380 containerd[1596]: 2025-10-28 05:18:58.318 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" Namespace="kube-system" Pod="coredns-66bc5c9577-8fdqb" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--8fdqb-eth0" Oct 28 05:18:58.380030 containerd[1596]: time="2025-10-28T05:18:58.379962789Z" level=info msg="connecting to shim 0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70" 
address="unix:///run/containerd/s/6bdb855ed800299349e820894c8dd3918d1324cfc196605e5c66a725af5cacb6" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:18:58.415970 systemd-networkd[1491]: cali6395a2a66f4: Link UP Oct 28 05:18:58.419545 systemd-networkd[1491]: cali6395a2a66f4: Gained carrier Oct 28 05:18:58.449065 systemd[1]: Started cri-containerd-0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70.scope - libcontainer container 0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70. Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.171 [INFO][4179] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.201 [INFO][4179] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0 csi-node-driver- calico-system 65eac80b-7114-46da-934f-c797b4afa603 735 0 2025-10-28 05:18:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4501.0.0-n-a8513f8a3e csi-node-driver-pj4dv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6395a2a66f4 [] [] }} ContainerID="6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" Namespace="calico-system" Pod="csi-node-driver-pj4dv" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.201 [INFO][4179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" Namespace="calico-system" Pod="csi-node-driver-pj4dv" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.272 [INFO][4221] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" HandleID="k8s-pod-network.6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.272 [INFO][4221] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" HandleID="k8s-pod-network.6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4501.0.0-n-a8513f8a3e", "pod":"csi-node-driver-pj4dv", "timestamp":"2025-10-28 05:18:58.27234887 +0000 UTC"}, Hostname:"ci-4501.0.0-n-a8513f8a3e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.272 [INFO][4221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.293 [INFO][4221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.294 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4501.0.0-n-a8513f8a3e' Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.338 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.353 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.361 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.365 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.370 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.370 [INFO][4221] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.375 [INFO][4221] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35 Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.384 [INFO][4221] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.400 [INFO][4221] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.195/26] block=192.168.103.192/26 handle="k8s-pod-network.6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.400 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.195/26] handle="k8s-pod-network.6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.401 [INFO][4221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 05:18:58.465760 containerd[1596]: 2025-10-28 05:18:58.401 [INFO][4221] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.195/26] IPv6=[] ContainerID="6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" HandleID="k8s-pod-network.6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0" Oct 28 05:18:58.469151 containerd[1596]: 2025-10-28 05:18:58.406 [INFO][4179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" Namespace="calico-system" Pod="csi-node-driver-pj4dv" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"65eac80b-7114-46da-934f-c797b4afa603", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"", Pod:"csi-node-driver-pj4dv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6395a2a66f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:18:58.469151 containerd[1596]: 2025-10-28 05:18:58.411 [INFO][4179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.195/32] ContainerID="6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" Namespace="calico-system" Pod="csi-node-driver-pj4dv" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0" Oct 28 05:18:58.469151 containerd[1596]: 2025-10-28 05:18:58.412 [INFO][4179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6395a2a66f4 ContainerID="6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" Namespace="calico-system" Pod="csi-node-driver-pj4dv" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0" Oct 28 05:18:58.469151 containerd[1596]: 2025-10-28 05:18:58.421 [INFO][4179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" Namespace="calico-system" Pod="csi-node-driver-pj4dv" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0" Oct 28 05:18:58.469151 containerd[1596]: 2025-10-28 05:18:58.426 [INFO][4179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" Namespace="calico-system" Pod="csi-node-driver-pj4dv" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"65eac80b-7114-46da-934f-c797b4afa603", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35", Pod:"csi-node-driver-pj4dv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6395a2a66f4", MAC:"06:91:80:95:5c:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:18:58.469151 containerd[1596]: 2025-10-28 05:18:58.453 [INFO][4179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" Namespace="calico-system" Pod="csi-node-driver-pj4dv" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-csi--node--driver--pj4dv-eth0" Oct 28 05:18:58.520029 containerd[1596]: time="2025-10-28T05:18:58.519924946Z" level=info msg="connecting to shim 6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35" address="unix:///run/containerd/s/c90ef8a86fa9b310d65794f8482c7b301f08ad2bb82a4b96f09646dcc9829b94" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:18:58.548069 systemd-networkd[1491]: cali0fd17602373: Link UP Oct 28 05:18:58.550321 systemd-networkd[1491]: cali0fd17602373: Gained carrier Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.170 [INFO][4180] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.200 [INFO][4180] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0 goldmane-7c778bb748- calico-system 1c4104d9-7ba6-4171-8c43-b6c170ee1774 849 0 2025-10-28 05:18:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4501.0.0-n-a8513f8a3e goldmane-7c778bb748-2jltp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0fd17602373 [] [] }} 
ContainerID="22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" Namespace="calico-system" Pod="goldmane-7c778bb748-2jltp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.201 [INFO][4180] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" Namespace="calico-system" Pod="goldmane-7c778bb748-2jltp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.283 [INFO][4223] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" HandleID="k8s-pod-network.22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.283 [INFO][4223] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" HandleID="k8s-pod-network.22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5100), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4501.0.0-n-a8513f8a3e", "pod":"goldmane-7c778bb748-2jltp", "timestamp":"2025-10-28 05:18:58.283370603 +0000 UTC"}, Hostname:"ci-4501.0.0-n-a8513f8a3e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.283 [INFO][4223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.401 [INFO][4223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.402 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4501.0.0-n-a8513f8a3e' Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.439 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.463 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.482 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.486 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.494 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.494 [INFO][4223] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.499 [INFO][4223] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5 Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.511 [INFO][4223] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.525 [INFO][4223] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.196/26] block=192.168.103.192/26 handle="k8s-pod-network.22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.526 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.196/26] handle="k8s-pod-network.22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.527 [INFO][4223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 05:18:58.588612 containerd[1596]: 2025-10-28 05:18:58.527 [INFO][4223] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.196/26] IPv6=[] ContainerID="22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" HandleID="k8s-pod-network.22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0" Oct 28 05:18:58.589808 containerd[1596]: 2025-10-28 05:18:58.537 [INFO][4180] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" Namespace="calico-system" Pod="goldmane-7c778bb748-2jltp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1c4104d9-7ba6-4171-8c43-b6c170ee1774", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"", Pod:"goldmane-7c778bb748-2jltp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0fd17602373", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:18:58.589808 containerd[1596]: 2025-10-28 05:18:58.539 [INFO][4180] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.196/32] ContainerID="22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" Namespace="calico-system" Pod="goldmane-7c778bb748-2jltp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0" Oct 28 05:18:58.589808 containerd[1596]: 2025-10-28 05:18:58.540 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fd17602373 ContainerID="22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" Namespace="calico-system" Pod="goldmane-7c778bb748-2jltp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0" Oct 28 05:18:58.589808 containerd[1596]: 2025-10-28 05:18:58.551 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" Namespace="calico-system" Pod="goldmane-7c778bb748-2jltp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0" Oct 28 05:18:58.589808 containerd[1596]: 2025-10-28 05:18:58.552 [INFO][4180] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" 
Namespace="calico-system" Pod="goldmane-7c778bb748-2jltp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1c4104d9-7ba6-4171-8c43-b6c170ee1774", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5", Pod:"goldmane-7c778bb748-2jltp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0fd17602373", MAC:"c2:98:15:8e:e1:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:18:58.589808 containerd[1596]: 2025-10-28 05:18:58.566 [INFO][4180] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" Namespace="calico-system" Pod="goldmane-7c778bb748-2jltp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-goldmane--7c778bb748--2jltp-eth0" Oct 28 05:18:58.624284 containerd[1596]: time="2025-10-28T05:18:58.623789651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8fdqb,Uid:58f11e77-8c5d-47b6-8552-cf2d1e5f275c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70\"" Oct 28 05:18:58.625490 kubelet[2761]: E1028 05:18:58.625459 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:58.635208 containerd[1596]: time="2025-10-28T05:18:58.635128242Z" level=info msg="CreateContainer within sandbox \"0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 05:18:58.673879 systemd[1]: Started cri-containerd-6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35.scope - libcontainer container 6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35. 
Oct 28 05:18:58.693161 containerd[1596]: time="2025-10-28T05:18:58.693104443Z" level=info msg="connecting to shim 22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5" address="unix:///run/containerd/s/6c7b1655ad48c2c49f2f14f58a06e9c91e4cf244f79095a1b0f8aeec2e2f757c" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:18:58.711963 containerd[1596]: time="2025-10-28T05:18:58.711904731Z" level=info msg="Container ff55a722718cae1ee68fa8e75d7d9fbb1e42a7644f91133a361a30b02737144a: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:18:58.733380 containerd[1596]: time="2025-10-28T05:18:58.733323918Z" level=info msg="CreateContainer within sandbox \"0d81fdb1529517aadbb991686ecb803d91c710ed16d14157dbc6e89b3cc6ea70\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff55a722718cae1ee68fa8e75d7d9fbb1e42a7644f91133a361a30b02737144a\"" Oct 28 05:18:58.734440 containerd[1596]: time="2025-10-28T05:18:58.734358558Z" level=info msg="StartContainer for \"ff55a722718cae1ee68fa8e75d7d9fbb1e42a7644f91133a361a30b02737144a\"" Oct 28 05:18:58.735867 systemd[1]: Started cri-containerd-22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5.scope - libcontainer container 22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5. Oct 28 05:18:58.739980 containerd[1596]: time="2025-10-28T05:18:58.739856180Z" level=info msg="connecting to shim ff55a722718cae1ee68fa8e75d7d9fbb1e42a7644f91133a361a30b02737144a" address="unix:///run/containerd/s/6bdb855ed800299349e820894c8dd3918d1324cfc196605e5c66a725af5cacb6" protocol=ttrpc version=3 Oct 28 05:18:58.776882 systemd[1]: Started cri-containerd-ff55a722718cae1ee68fa8e75d7d9fbb1e42a7644f91133a361a30b02737144a.scope - libcontainer container ff55a722718cae1ee68fa8e75d7d9fbb1e42a7644f91133a361a30b02737144a. 
Oct 28 05:18:58.898213 containerd[1596]: time="2025-10-28T05:18:58.897841167Z" level=info msg="StartContainer for \"ff55a722718cae1ee68fa8e75d7d9fbb1e42a7644f91133a361a30b02737144a\" returns successfully" Oct 28 05:18:58.963942 containerd[1596]: time="2025-10-28T05:18:58.963707816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pj4dv,Uid:65eac80b-7114-46da-934f-c797b4afa603,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ddd032eb5ecdad921289c0925a863ea03a2de63eb852eb74542f495b4656b35\"" Oct 28 05:18:58.971299 containerd[1596]: time="2025-10-28T05:18:58.971179439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 05:18:59.095416 containerd[1596]: time="2025-10-28T05:18:59.095180314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2jltp,Uid:1c4104d9-7ba6-4171-8c43-b6c170ee1774,Namespace:calico-system,Attempt:0,} returns sandbox id \"22128d12d6756af4c7e592585bb92c5bc7f149925ec0d29bc0c6950d13b9cfd5\"" Oct 28 05:18:59.286618 containerd[1596]: time="2025-10-28T05:18:59.286553632Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:18:59.288399 containerd[1596]: time="2025-10-28T05:18:59.288292360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 05:18:59.288399 containerd[1596]: time="2025-10-28T05:18:59.288362201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 05:18:59.288866 kubelet[2761]: E1028 05:18:59.288822 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:18:59.289019 kubelet[2761]: E1028 05:18:59.288876 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:18:59.289470 kubelet[2761]: E1028 05:18:59.289069 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-pj4dv_calico-system(65eac80b-7114-46da-934f-c797b4afa603): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 05:18:59.290384 containerd[1596]: time="2025-10-28T05:18:59.290350068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 05:18:59.384701 kubelet[2761]: E1028 05:18:59.384661 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:18:59.427936 kubelet[2761]: I1028 05:18:59.427807 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8fdqb" 
podStartSLOduration=43.427764509 podStartE2EDuration="43.427764509s" podCreationTimestamp="2025-10-28 05:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:18:59.404278312 +0000 UTC m=+50.533279206" watchObservedRunningTime="2025-10-28 05:18:59.427764509 +0000 UTC m=+50.556765386" Oct 28 05:18:59.600885 containerd[1596]: time="2025-10-28T05:18:59.600718673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:18:59.601952 containerd[1596]: time="2025-10-28T05:18:59.601806920Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 05:18:59.602136 containerd[1596]: time="2025-10-28T05:18:59.601898232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 05:18:59.602568 kubelet[2761]: E1028 05:18:59.602530 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 05:18:59.602934 kubelet[2761]: E1028 05:18:59.602581 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 05:18:59.602934 kubelet[2761]: E1028 05:18:59.602827 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-2jltp_calico-system(1c4104d9-7ba6-4171-8c43-b6c170ee1774): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 05:18:59.603036 kubelet[2761]: E1028 05:18:59.602965 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2jltp" podUID="1c4104d9-7ba6-4171-8c43-b6c170ee1774" Oct 28 05:18:59.604503 containerd[1596]: time="2025-10-28T05:18:59.604465314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 05:18:59.916217 containerd[1596]: time="2025-10-28T05:18:59.916055058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:18:59.917313 containerd[1596]: time="2025-10-28T05:18:59.917175346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 05:18:59.917313 containerd[1596]: time="2025-10-28T05:18:59.917277282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 05:18:59.917580 kubelet[2761]: E1028 05:18:59.917518 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:18:59.919011 kubelet[2761]: E1028 05:18:59.917591 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:18:59.919011 kubelet[2761]: E1028 05:18:59.917710 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-pj4dv_calico-system(65eac80b-7114-46da-934f-c797b4afa603): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 05:18:59.919011 kubelet[2761]: E1028 05:18:59.917761 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pj4dv" podUID="65eac80b-7114-46da-934f-c797b4afa603" Oct 28 05:19:00.065039 kubelet[2761]: E1028 05:19:00.064990 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:00.066773 containerd[1596]: time="2025-10-28T05:19:00.066692204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z42gp,Uid:10a83b40-012d-4ade-8ade-595315f777ca,Namespace:kube-system,Attempt:0,}" Oct 28 05:19:00.069804 containerd[1596]: time="2025-10-28T05:19:00.069273473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d779dd4f-74h24,Uid:2dffcfa9-84ba-4077-b168-bc5f5cb035a9,Namespace:calico-apiserver,Attempt:0,}" Oct 28 05:19:00.074917 containerd[1596]: 
time="2025-10-28T05:19:00.073012914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4bfd986d-xlpx8,Uid:3e214f89-6d8a-48a0-9125-ff9ee356160a,Namespace:calico-system,Attempt:0,}" Oct 28 05:19:00.197861 systemd-networkd[1491]: cali21736d03539: Gained IPv6LL Oct 28 05:19:00.261966 systemd-networkd[1491]: cali0fd17602373: Gained IPv6LL Oct 28 05:19:00.389840 systemd-networkd[1491]: cali6395a2a66f4: Gained IPv6LL Oct 28 05:19:00.402189 kubelet[2761]: E1028 05:19:00.402141 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:00.405164 kubelet[2761]: E1028 05:19:00.405090 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2jltp" podUID="1c4104d9-7ba6-4171-8c43-b6c170ee1774" Oct 28 05:19:00.407887 kubelet[2761]: E1028 05:19:00.407831 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pj4dv" podUID="65eac80b-7114-46da-934f-c797b4afa603" Oct 28 05:19:00.490981 systemd-networkd[1491]: cali9e204a2cfea: Link UP Oct 28 05:19:00.491676 systemd-networkd[1491]: cali9e204a2cfea: Gained carrier Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.197 [INFO][4451] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.247 [INFO][4451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0 calico-apiserver-85d779dd4f- calico-apiserver 2dffcfa9-84ba-4077-b168-bc5f5cb035a9 850 0 2025-10-28 05:18:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85d779dd4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4501.0.0-n-a8513f8a3e calico-apiserver-85d779dd4f-74h24 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e204a2cfea [] [] }} 
ContainerID="e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-74h24" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.247 [INFO][4451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-74h24" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.360 [INFO][4492] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" HandleID="k8s-pod-network.e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.363 [INFO][4492] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" HandleID="k8s-pod-network.e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4501.0.0-n-a8513f8a3e", "pod":"calico-apiserver-85d779dd4f-74h24", "timestamp":"2025-10-28 05:19:00.360053179 +0000 UTC"}, Hostname:"ci-4501.0.0-n-a8513f8a3e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.363 [INFO][4492] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.363 [INFO][4492] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.363 [INFO][4492] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4501.0.0-n-a8513f8a3e' Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.379 [INFO][4492] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.404 [INFO][4492] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.429 [INFO][4492] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.439 [INFO][4492] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.445 [INFO][4492] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.445 [INFO][4492] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.450 [INFO][4492] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9 Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.459 [INFO][4492] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.468 [INFO][4492] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.197/26] block=192.168.103.192/26 handle="k8s-pod-network.e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.468 [INFO][4492] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.197/26] handle="k8s-pod-network.e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.468 [INFO][4492] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 05:19:00.529905 containerd[1596]: 2025-10-28 05:19:00.468 [INFO][4492] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.197/26] IPv6=[] ContainerID="e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" HandleID="k8s-pod-network.e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0" Oct 28 05:19:00.533328 containerd[1596]: 2025-10-28 05:19:00.474 [INFO][4451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-74h24" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0", GenerateName:"calico-apiserver-85d779dd4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"2dffcfa9-84ba-4077-b168-bc5f5cb035a9", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d779dd4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"", Pod:"calico-apiserver-85d779dd4f-74h24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e204a2cfea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:19:00.533328 containerd[1596]: 2025-10-28 05:19:00.474 [INFO][4451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.197/32] ContainerID="e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-74h24" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0" Oct 28 05:19:00.533328 containerd[1596]: 2025-10-28 05:19:00.474 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e204a2cfea ContainerID="e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-74h24" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0" Oct 28 05:19:00.533328 containerd[1596]: 2025-10-28 05:19:00.497 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-74h24" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0" Oct 28 05:19:00.533328 containerd[1596]: 2025-10-28 05:19:00.501 
[INFO][4451] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-74h24" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0", GenerateName:"calico-apiserver-85d779dd4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"2dffcfa9-84ba-4077-b168-bc5f5cb035a9", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d779dd4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9", Pod:"calico-apiserver-85d779dd4f-74h24", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e204a2cfea", MAC:"26:69:fe:0f:d9:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:19:00.533328 containerd[1596]: 2025-10-28 05:19:00.521 [INFO][4451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-74h24" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--74h24-eth0" Oct 28 05:19:00.597988 containerd[1596]: time="2025-10-28T05:19:00.597890901Z" level=info msg="connecting to shim e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9" address="unix:///run/containerd/s/73164fdb750b81723e011079a36f0fde6c7924a27fd284e724af378d907b59bb" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:19:00.646915 systemd-networkd[1491]: calif837d7c6c0c: Link UP Oct 28 05:19:00.649484 systemd-networkd[1491]: calif837d7c6c0c: Gained carrier Oct 28 05:19:00.679112 systemd[1]: Started cri-containerd-e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9.scope - libcontainer container e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9. 
Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.191 [INFO][4449] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.252 [INFO][4449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0 coredns-66bc5c9577- kube-system 10a83b40-012d-4ade-8ade-595315f777ca 843 0 2025-10-28 05:18:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4501.0.0-n-a8513f8a3e coredns-66bc5c9577-z42gp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif837d7c6c0c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" Namespace="kube-system" Pod="coredns-66bc5c9577-z42gp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.260 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" Namespace="kube-system" Pod="coredns-66bc5c9577-z42gp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.362 [INFO][4497] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" HandleID="k8s-pod-network.24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.364 [INFO][4497] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" HandleID="k8s-pod-network.24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285920), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4501.0.0-n-a8513f8a3e", "pod":"coredns-66bc5c9577-z42gp", "timestamp":"2025-10-28 05:19:00.362690442 +0000 UTC"}, Hostname:"ci-4501.0.0-n-a8513f8a3e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.365 [INFO][4497] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.468 [INFO][4497] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.469 [INFO][4497] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4501.0.0-n-a8513f8a3e' Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.504 [INFO][4497] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.534 [INFO][4497] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.546 [INFO][4497] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.551 [INFO][4497] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.556 [INFO][4497] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.561 [INFO][4497] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.582 [INFO][4497] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.601 [INFO][4497] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.619 [INFO][4497] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.198/26] block=192.168.103.192/26 handle="k8s-pod-network.24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.620 [INFO][4497] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.198/26] handle="k8s-pod-network.24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.620 [INFO][4497] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 05:19:00.719913 containerd[1596]: 2025-10-28 05:19:00.621 [INFO][4497] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.198/26] IPv6=[] ContainerID="24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" HandleID="k8s-pod-network.24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0" Oct 28 05:19:00.721821 containerd[1596]: 2025-10-28 05:19:00.635 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" Namespace="kube-system" Pod="coredns-66bc5c9577-z42gp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"10a83b40-012d-4ade-8ade-595315f777ca", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"", Pod:"coredns-66bc5c9577-z42gp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif837d7c6c0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:19:00.721821 containerd[1596]: 2025-10-28 05:19:00.637 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.198/32] ContainerID="24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" Namespace="kube-system" Pod="coredns-66bc5c9577-z42gp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0" Oct 28 05:19:00.721821 containerd[1596]: 2025-10-28 05:19:00.637 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif837d7c6c0c ContainerID="24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" Namespace="kube-system" Pod="coredns-66bc5c9577-z42gp" 
WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0" Oct 28 05:19:00.721821 containerd[1596]: 2025-10-28 05:19:00.662 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" Namespace="kube-system" Pod="coredns-66bc5c9577-z42gp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0" Oct 28 05:19:00.721821 containerd[1596]: 2025-10-28 05:19:00.681 [INFO][4449] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" Namespace="kube-system" Pod="coredns-66bc5c9577-z42gp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"10a83b40-012d-4ade-8ade-595315f777ca", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b", Pod:"coredns-66bc5c9577-z42gp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif837d7c6c0c", MAC:"4e:57:cf:84:8c:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:19:00.722243 containerd[1596]: 2025-10-28 05:19:00.712 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" Namespace="kube-system" Pod="coredns-66bc5c9577-z42gp" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-coredns--66bc5c9577--z42gp-eth0" Oct 28 05:19:00.769148 containerd[1596]: time="2025-10-28T05:19:00.768978566Z" level=info msg="connecting to shim 24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b" 
address="unix:///run/containerd/s/e0a9b6d67824e24ec00753264cf369c53318c3faa7b246fec3351ac596e22699" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:19:00.783969 systemd-networkd[1491]: cali4258ab33f06: Link UP Oct 28 05:19:00.789749 systemd-networkd[1491]: cali4258ab33f06: Gained carrier Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.217 [INFO][4469] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.274 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0 calico-kube-controllers-c4bfd986d- calico-system 3e214f89-6d8a-48a0-9125-ff9ee356160a 847 0 2025-10-28 05:18:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c4bfd986d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4501.0.0-n-a8513f8a3e calico-kube-controllers-c4bfd986d-xlpx8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4258ab33f06 [] [] }} ContainerID="598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" Namespace="calico-system" Pod="calico-kube-controllers-c4bfd986d-xlpx8" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.276 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" Namespace="calico-system" Pod="calico-kube-controllers-c4bfd986d-xlpx8" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.420 [INFO][4503] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" HandleID="k8s-pod-network.598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.420 [INFO][4503] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" HandleID="k8s-pod-network.598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4501.0.0-n-a8513f8a3e", "pod":"calico-kube-controllers-c4bfd986d-xlpx8", "timestamp":"2025-10-28 05:19:00.420219478 +0000 UTC"}, Hostname:"ci-4501.0.0-n-a8513f8a3e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.421 [INFO][4503] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.620 [INFO][4503] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.620 [INFO][4503] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4501.0.0-n-a8513f8a3e' Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.649 [INFO][4503] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.693 [INFO][4503] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.707 [INFO][4503] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.715 [INFO][4503] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.724 [INFO][4503] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.725 [INFO][4503] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.729 [INFO][4503] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2 Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.751 [INFO][4503] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.766 [INFO][4503] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.199/26] block=192.168.103.192/26 handle="k8s-pod-network.598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.769 [INFO][4503] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.199/26] handle="k8s-pod-network.598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.770 [INFO][4503] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
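The block of ipam.go messages above is Calico's standard allocation flow: the plugin takes the host-wide IPAM lock, confirms this node's affinity to the 192.168.103.192/26 block, claims 192.168.103.199 for the new endpoint, and releases the lock. A quick way to verify such an allocation after the fact is calicoctl's IPAM commands; this is a sketch that assumes calicoctl is available on the node (or run from a calicoctl pod):

    # Overall pool and block utilisation for the cluster
    calicoctl ipam show --show-blocks
    # Which handle owns the specific address claimed above
    calicoctl ipam show --ip=192.168.103.199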
Oct 28 05:19:00.834870 containerd[1596]: 2025-10-28 05:19:00.770 [INFO][4503] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.199/26] IPv6=[] ContainerID="598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" HandleID="k8s-pod-network.598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0" Oct 28 05:19:00.837811 containerd[1596]: 2025-10-28 05:19:00.777 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" Namespace="calico-system" Pod="calico-kube-controllers-c4bfd986d-xlpx8" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0", GenerateName:"calico-kube-controllers-c4bfd986d-", Namespace:"calico-system", SelfLink:"", UID:"3e214f89-6d8a-48a0-9125-ff9ee356160a", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4bfd986d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"", Pod:"calico-kube-controllers-c4bfd986d-xlpx8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4258ab33f06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:19:00.837811 containerd[1596]: 2025-10-28 05:19:00.777 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.199/32] ContainerID="598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" Namespace="calico-system" Pod="calico-kube-controllers-c4bfd986d-xlpx8" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0" Oct 28 05:19:00.837811 containerd[1596]: 2025-10-28 05:19:00.777 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4258ab33f06 ContainerID="598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" Namespace="calico-system" Pod="calico-kube-controllers-c4bfd986d-xlpx8" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0" Oct 28 05:19:00.837811 containerd[1596]: 2025-10-28 05:19:00.789 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" Namespace="calico-system" Pod="calico-kube-controllers-c4bfd986d-xlpx8" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0" 
Oct 28 05:19:00.837811 containerd[1596]: 2025-10-28 05:19:00.797 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" Namespace="calico-system" Pod="calico-kube-controllers-c4bfd986d-xlpx8" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0", GenerateName:"calico-kube-controllers-c4bfd986d-", Namespace:"calico-system", SelfLink:"", UID:"3e214f89-6d8a-48a0-9125-ff9ee356160a", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4bfd986d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2", Pod:"calico-kube-controllers-c4bfd986d-xlpx8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4258ab33f06", MAC:"3a:30:10:df:a7:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:19:00.837811 containerd[1596]: 2025-10-28 05:19:00.826 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" Namespace="calico-system" Pod="calico-kube-controllers-c4bfd986d-xlpx8" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--kube--controllers--c4bfd986d--xlpx8-eth0" Oct 28 05:19:00.836191 systemd[1]: Started cri-containerd-24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b.scope - libcontainer container 24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b. 
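Both WorkloadEndpoint objects dumped above (calif837d7c6c0c for the coredns pod, cali4258ab33f06 for calico-kube-controllers) are written to the Calico datastore once the MAC address and container ID have been filled in. Assuming calicoctl and shell access to the node, the stored records and the host-side veths can be cross-checked, for example:

    # List the Calico workload endpoints in the two namespaces involved
    calicoctl get workloadendpoints -n kube-system -o wide
    calicoctl get workloadendpoints -n calico-system -o wide
    # The interface names should match the links brought up by systemd-networkd above
    ip link show calif837d7c6c0c
    ip link show cali4258ab33f06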
Oct 28 05:19:00.876416 containerd[1596]: time="2025-10-28T05:19:00.875871322Z" level=info msg="connecting to shim 598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2" address="unix:///run/containerd/s/2c0512d314400e48abf4c8e7d945122284c1a72080ce7e69838bd73d873ec5ba" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:19:00.937027 containerd[1596]: time="2025-10-28T05:19:00.936395497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z42gp,Uid:10a83b40-012d-4ade-8ade-595315f777ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b\"" Oct 28 05:19:00.940253 kubelet[2761]: E1028 05:19:00.940186 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:00.947204 systemd[1]: Started cri-containerd-598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2.scope - libcontainer container 598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2. Oct 28 05:19:00.952853 containerd[1596]: time="2025-10-28T05:19:00.952289161Z" level=info msg="CreateContainer within sandbox \"24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 05:19:00.970685 containerd[1596]: time="2025-10-28T05:19:00.970329392Z" level=info msg="Container 57c932140f25e064e352b25f5f363e2e1105836f48f60cc4f9b141e77001bd83: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:19:00.983108 containerd[1596]: time="2025-10-28T05:19:00.982928445Z" level=info msg="CreateContainer within sandbox \"24553247cd0618b781f2f439eb26ec96fac7279b82ca39291cdbe5d155c3586b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57c932140f25e064e352b25f5f363e2e1105836f48f60cc4f9b141e77001bd83\"" Oct 28 05:19:00.986711 containerd[1596]: time="2025-10-28T05:19:00.985690243Z" level=info msg="StartContainer for \"57c932140f25e064e352b25f5f363e2e1105836f48f60cc4f9b141e77001bd83\"" Oct 28 05:19:00.989869 containerd[1596]: time="2025-10-28T05:19:00.989706742Z" level=info msg="connecting to shim 57c932140f25e064e352b25f5f363e2e1105836f48f60cc4f9b141e77001bd83" address="unix:///run/containerd/s/e0a9b6d67824e24ec00753264cf369c53318c3faa7b246fec3351ac596e22699" protocol=ttrpc version=3 Oct 28 05:19:01.047893 systemd[1]: Started cri-containerd-57c932140f25e064e352b25f5f363e2e1105836f48f60cc4f9b141e77001bd83.scope - libcontainer container 57c932140f25e064e352b25f5f363e2e1105836f48f60cc4f9b141e77001bd83. 
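The recurring "Nameserver limits exceeded" messages come from kubelet's DNS configuration check: kubelet caps the nameserver list it writes into a pod's resolv.conf at three entries, so when the source resolver configuration (here the node's) lists more than that, the extras are dropped and this warning is logged; the duplicate 67.207.67.3 in the applied line suggests the node's configuration also contains repeated entries. A purely illustrative resolv.conf that would trigger the warning:

    # /etc/resolv.conf on the node (hypothetical example)
    nameserver 67.207.67.3
    nameserver 67.207.67.2
    nameserver 67.207.67.3
    nameserver 8.8.8.8      # fourth entry -> omitted by kubelet, warning logged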
Oct 28 05:19:01.069733 containerd[1596]: time="2025-10-28T05:19:01.068297676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d779dd4f-td9lt,Uid:5a48253c-a0f4-4a3b-ba79-be0ea189e322,Namespace:calico-apiserver,Attempt:0,}" Oct 28 05:19:01.130937 containerd[1596]: time="2025-10-28T05:19:01.130866709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d779dd4f-74h24,Uid:2dffcfa9-84ba-4077-b168-bc5f5cb035a9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e1987f42ec1e9d2ba5cb43438c2ef331b0ca9e0a5fa1955d2e3da66d55d375c9\"" Oct 28 05:19:01.134738 containerd[1596]: time="2025-10-28T05:19:01.134628461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:19:01.225506 containerd[1596]: time="2025-10-28T05:19:01.225094464Z" level=info msg="StartContainer for \"57c932140f25e064e352b25f5f363e2e1105836f48f60cc4f9b141e77001bd83\" returns successfully" Oct 28 05:19:01.384728 containerd[1596]: time="2025-10-28T05:19:01.384443417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4bfd986d-xlpx8,Uid:3e214f89-6d8a-48a0-9125-ff9ee356160a,Namespace:calico-system,Attempt:0,} returns sandbox id \"598946514db28f5905868d720cf516dccd3a764f1d5aaf362efaa209376a92a2\"" Oct 28 05:19:01.446287 kubelet[2761]: E1028 05:19:01.446249 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:01.459767 kubelet[2761]: E1028 05:19:01.459096 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:01.471122 systemd-networkd[1491]: caliac7e14f9203: Link UP Oct 28 05:19:01.474934 systemd-networkd[1491]: caliac7e14f9203: Gained carrier Oct 28 05:19:01.483754 kubelet[2761]: I1028 05:19:01.483679 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-z42gp" podStartSLOduration=45.483633708 podStartE2EDuration="45.483633708s" podCreationTimestamp="2025-10-28 05:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:19:01.478980616 +0000 UTC m=+52.607981506" watchObservedRunningTime="2025-10-28 05:19:01.483633708 +0000 UTC m=+52.612634596" Oct 28 05:19:01.503021 containerd[1596]: time="2025-10-28T05:19:01.502783769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:01.504088 containerd[1596]: time="2025-10-28T05:19:01.503932884Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:19:01.504468 containerd[1596]: time="2025-10-28T05:19:01.504002983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:19:01.505141 kubelet[2761]: E1028 05:19:01.504746 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:01.505141 kubelet[2761]: E1028 05:19:01.504818 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:01.505141 kubelet[2761]: E1028 05:19:01.505015 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85d779dd4f-74h24_calico-apiserver(2dffcfa9-84ba-4077-b168-bc5f5cb035a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:01.505141 kubelet[2761]: E1028 05:19:01.505079 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-74h24" podUID="2dffcfa9-84ba-4077-b168-bc5f5cb035a9" Oct 28 05:19:01.506188 containerd[1596]: time="2025-10-28T05:19:01.505983290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.185 [INFO][4693] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.228 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0 calico-apiserver-85d779dd4f- calico-apiserver 5a48253c-a0f4-4a3b-ba79-be0ea189e322 851 0 2025-10-28 05:18:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85d779dd4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4501.0.0-n-a8513f8a3e calico-apiserver-85d779dd4f-td9lt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliac7e14f9203 [] [] }} ContainerID="a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-td9lt" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.228 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-td9lt" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.307 [INFO][4717] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" HandleID="k8s-pod-network.a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.311 [INFO][4717] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" HandleID="k8s-pod-network.a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000323880), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4501.0.0-n-a8513f8a3e", "pod":"calico-apiserver-85d779dd4f-td9lt", "timestamp":"2025-10-28 05:19:01.307663136 +0000 UTC"}, Hostname:"ci-4501.0.0-n-a8513f8a3e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.311 [INFO][4717] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.311 [INFO][4717] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.311 [INFO][4717] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4501.0.0-n-a8513f8a3e' Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.331 [INFO][4717] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.366 [INFO][4717] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.386 [INFO][4717] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.394 [INFO][4717] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.399 [INFO][4717] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.400 [INFO][4717] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.404 [INFO][4717] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.425 [INFO][4717] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.450 [INFO][4717] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.200/26] block=192.168.103.192/26 
handle="k8s-pod-network.a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.450 [INFO][4717] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.200/26] handle="k8s-pod-network.a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" host="ci-4501.0.0-n-a8513f8a3e" Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.450 [INFO][4717] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 05:19:01.537884 containerd[1596]: 2025-10-28 05:19:01.450 [INFO][4717] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.200/26] IPv6=[] ContainerID="a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" HandleID="k8s-pod-network.a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" Workload="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0" Oct 28 05:19:01.545627 containerd[1596]: 2025-10-28 05:19:01.462 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-td9lt" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0", GenerateName:"calico-apiserver-85d779dd4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a48253c-a0f4-4a3b-ba79-be0ea189e322", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d779dd4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"", Pod:"calico-apiserver-85d779dd4f-td9lt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac7e14f9203", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:19:01.545627 containerd[1596]: 2025-10-28 05:19:01.462 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.200/32] ContainerID="a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-td9lt" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0" Oct 28 05:19:01.545627 containerd[1596]: 2025-10-28 05:19:01.462 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac7e14f9203 ContainerID="a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-td9lt" 
WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0" Oct 28 05:19:01.545627 containerd[1596]: 2025-10-28 05:19:01.474 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-td9lt" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0" Oct 28 05:19:01.545627 containerd[1596]: 2025-10-28 05:19:01.477 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-td9lt" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0", GenerateName:"calico-apiserver-85d779dd4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a48253c-a0f4-4a3b-ba79-be0ea189e322", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d779dd4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4501.0.0-n-a8513f8a3e", ContainerID:"a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae", Pod:"calico-apiserver-85d779dd4f-td9lt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac7e14f9203", MAC:"6a:62:74:14:1f:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:19:01.545627 containerd[1596]: 2025-10-28 05:19:01.522 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" Namespace="calico-apiserver" Pod="calico-apiserver-85d779dd4f-td9lt" WorkloadEndpoint="ci--4501.0.0--n--a8513f8a3e-k8s-calico--apiserver--85d779dd4f--td9lt-eth0" Oct 28 05:19:01.618982 containerd[1596]: time="2025-10-28T05:19:01.618859059Z" level=info msg="connecting to shim a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae" address="unix:///run/containerd/s/138585345aace0f124fb03dc47423b6d7ba61d5f81f9f3ef71712a863977e9be" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:19:01.717624 systemd[1]: Started cri-containerd-a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae.scope - libcontainer container a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae. 
Oct 28 05:19:01.872093 containerd[1596]: time="2025-10-28T05:19:01.871821654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d779dd4f-td9lt,Uid:5a48253c-a0f4-4a3b-ba79-be0ea189e322,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a4e27715889b10b4bba3a54ee8b0303b5dc600aad99594ea556211c9b0494dae\"" Oct 28 05:19:01.990855 systemd-networkd[1491]: cali9e204a2cfea: Gained IPv6LL Oct 28 05:19:01.996980 containerd[1596]: time="2025-10-28T05:19:01.996692891Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:02.024714 containerd[1596]: time="2025-10-28T05:19:02.023672705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 05:19:02.024714 containerd[1596]: time="2025-10-28T05:19:02.023770892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 05:19:02.025608 kubelet[2761]: E1028 05:19:02.024353 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:19:02.025608 kubelet[2761]: E1028 05:19:02.024422 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:19:02.032889 kubelet[2761]: E1028 05:19:02.027831 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-c4bfd986d-xlpx8_calico-system(3e214f89-6d8a-48a0-9125-ff9ee356160a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:02.032889 kubelet[2761]: E1028 05:19:02.027925 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c4bfd986d-xlpx8" podUID="3e214f89-6d8a-48a0-9125-ff9ee356160a" Oct 28 05:19:02.034452 containerd[1596]: time="2025-10-28T05:19:02.033676182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:19:02.118920 systemd-networkd[1491]: calif837d7c6c0c: Gained IPv6LL Oct 28 05:19:02.246457 systemd-networkd[1491]: cali4258ab33f06: Gained IPv6LL Oct 28 05:19:02.384822 
containerd[1596]: time="2025-10-28T05:19:02.384672281Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:02.386530 containerd[1596]: time="2025-10-28T05:19:02.386340156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:19:02.386530 containerd[1596]: time="2025-10-28T05:19:02.386481606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:19:02.386921 kubelet[2761]: E1028 05:19:02.386834 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:02.387003 kubelet[2761]: E1028 05:19:02.386925 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:02.387786 kubelet[2761]: E1028 05:19:02.387718 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85d779dd4f-td9lt_calico-apiserver(5a48253c-a0f4-4a3b-ba79-be0ea189e322): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:02.387916 kubelet[2761]: E1028 05:19:02.387798 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" podUID="5a48253c-a0f4-4a3b-ba79-be0ea189e322" Oct 28 05:19:02.471011 kubelet[2761]: E1028 05:19:02.470963 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:02.474222 kubelet[2761]: E1028 05:19:02.474116 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-74h24" podUID="2dffcfa9-84ba-4077-b168-bc5f5cb035a9" Oct 28 05:19:02.475008 kubelet[2761]: E1028 
05:19:02.474722 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" podUID="5a48253c-a0f4-4a3b-ba79-be0ea189e322" Oct 28 05:19:02.475740 kubelet[2761]: E1028 05:19:02.475438 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c4bfd986d-xlpx8" podUID="3e214f89-6d8a-48a0-9125-ff9ee356160a" Oct 28 05:19:02.600930 kubelet[2761]: I1028 05:19:02.600274 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 05:19:02.601503 kubelet[2761]: E1028 05:19:02.601443 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:02.629901 systemd-networkd[1491]: caliac7e14f9203: Gained IPv6LL Oct 28 05:19:03.474741 kubelet[2761]: E1028 05:19:03.474564 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:03.476764 kubelet[2761]: E1028 05:19:03.476707 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:03.477623 kubelet[2761]: E1028 05:19:03.477588 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" podUID="5a48253c-a0f4-4a3b-ba79-be0ea189e322" Oct 28 05:19:03.775145 systemd-networkd[1491]: vxlan.calico: Link UP Oct 28 05:19:03.775151 systemd-networkd[1491]: vxlan.calico: Gained carrier Oct 28 05:19:04.259071 systemd[1]: Started sshd@7-164.92.80.11:22-139.178.89.65:47844.service - OpenSSH per-connection server daemon (139.178.89.65:47844). Oct 28 05:19:04.401441 sshd[4910]: Accepted publickey for core from 139.178.89.65 port 47844 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:04.404423 sshd-session[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:04.411992 systemd-logind[1570]: New session 8 of user core. 
Oct 28 05:19:04.417969 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 28 05:19:05.040059 sshd[4917]: Connection closed by 139.178.89.65 port 47844 Oct 28 05:19:05.040886 sshd-session[4910]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:05.048143 systemd[1]: sshd@7-164.92.80.11:22-139.178.89.65:47844.service: Deactivated successfully. Oct 28 05:19:05.051672 systemd[1]: session-8.scope: Deactivated successfully. Oct 28 05:19:05.053711 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit. Oct 28 05:19:05.057595 systemd-logind[1570]: Removed session 8. Oct 28 05:19:05.317898 systemd-networkd[1491]: vxlan.calico: Gained IPv6LL Oct 28 05:19:10.061129 systemd[1]: Started sshd@8-164.92.80.11:22-139.178.89.65:59904.service - OpenSSH per-connection server daemon (139.178.89.65:59904). Oct 28 05:19:10.144915 sshd[4983]: Accepted publickey for core from 139.178.89.65 port 59904 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:10.147013 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:10.153729 systemd-logind[1570]: New session 9 of user core. Oct 28 05:19:10.162956 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 28 05:19:10.296403 sshd[4986]: Connection closed by 139.178.89.65 port 59904 Oct 28 05:19:10.296949 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:10.304138 systemd[1]: sshd@8-164.92.80.11:22-139.178.89.65:59904.service: Deactivated successfully. Oct 28 05:19:10.307237 systemd[1]: session-9.scope: Deactivated successfully. Oct 28 05:19:10.308828 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit. Oct 28 05:19:10.310501 systemd-logind[1570]: Removed session 9. 
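The image pull failures above and below all follow the same pattern: containerd's resolver gets a 404 from ghcr.io for the ghcr.io/flatcar/calico/*:v3.30.4 references, kubelet surfaces that as ErrImagePull, and subsequent pod syncs back off with ImagePullBackOff. A sketch of how to reproduce and inspect this, using the registry path and tag taken from the log and assuming node plus kubectl access:

    # Try the pull directly against the container runtime
    crictl pull ghcr.io/flatcar/calico/apiserver:v3.30.4
    ctr -n k8s.io images pull ghcr.io/flatcar/calico/apiserver:v3.30.4
    # Inspect the resulting pod events and back-off state
    kubectl -n calico-apiserver describe pod calico-apiserver-85d779dd4f-td9lt
    kubectl -n calico-system get pods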
Oct 28 05:19:11.065211 containerd[1596]: time="2025-10-28T05:19:11.064673456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 05:19:11.376755 containerd[1596]: time="2025-10-28T05:19:11.376341561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:11.384469 containerd[1596]: time="2025-10-28T05:19:11.384325858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 05:19:11.384469 containerd[1596]: time="2025-10-28T05:19:11.384411449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 05:19:11.384778 kubelet[2761]: E1028 05:19:11.384710 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:19:11.385616 kubelet[2761]: E1028 05:19:11.384793 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:19:11.385616 kubelet[2761]: E1028 05:19:11.384958 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-64c845d8b5-xc7c2_calico-system(a4214a7a-e8c1-4291-b44c-d7574f9e0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:11.387524 containerd[1596]: time="2025-10-28T05:19:11.387177295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 05:19:11.731872 containerd[1596]: time="2025-10-28T05:19:11.731769311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:11.733294 containerd[1596]: time="2025-10-28T05:19:11.733223349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 05:19:11.734695 kubelet[2761]: E1028 05:19:11.733941 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:19:11.734695 kubelet[2761]: E1028 05:19:11.734008 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:19:11.734695 kubelet[2761]: E1028 05:19:11.734116 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-64c845d8b5-xc7c2_calico-system(a4214a7a-e8c1-4291-b44c-d7574f9e0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:11.735074 kubelet[2761]: E1028 05:19:11.734172 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64c845d8b5-xc7c2" podUID="a4214a7a-e8c1-4291-b44c-d7574f9e0241" Oct 28 05:19:11.737664 containerd[1596]: time="2025-10-28T05:19:11.733319015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 05:19:12.069849 containerd[1596]: time="2025-10-28T05:19:12.069650878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 05:19:12.389603 containerd[1596]: time="2025-10-28T05:19:12.389423295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:12.390243 containerd[1596]: time="2025-10-28T05:19:12.390184924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 05:19:12.390357 containerd[1596]: time="2025-10-28T05:19:12.390286093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 05:19:12.390628 kubelet[2761]: E1028 05:19:12.390556 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 05:19:12.390628 kubelet[2761]: E1028 05:19:12.390623 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 
05:19:12.391928 kubelet[2761]: E1028 05:19:12.390813 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-2jltp_calico-system(1c4104d9-7ba6-4171-8c43-b6c170ee1774): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:12.391928 kubelet[2761]: E1028 05:19:12.390870 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2jltp" podUID="1c4104d9-7ba6-4171-8c43-b6c170ee1774" Oct 28 05:19:14.064782 containerd[1596]: time="2025-10-28T05:19:14.064678022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:19:14.375021 containerd[1596]: time="2025-10-28T05:19:14.374875542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:14.375726 containerd[1596]: time="2025-10-28T05:19:14.375669868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:19:14.375875 containerd[1596]: time="2025-10-28T05:19:14.375695920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:19:14.376022 kubelet[2761]: E1028 05:19:14.375975 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:14.376557 kubelet[2761]: E1028 05:19:14.376022 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:14.376602 containerd[1596]: time="2025-10-28T05:19:14.376390696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 05:19:14.376966 kubelet[2761]: E1028 05:19:14.376829 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85d779dd4f-74h24_calico-apiserver(2dffcfa9-84ba-4077-b168-bc5f5cb035a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:14.376966 kubelet[2761]: E1028 05:19:14.376890 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-74h24" podUID="2dffcfa9-84ba-4077-b168-bc5f5cb035a9" Oct 28 05:19:14.711907 containerd[1596]: time="2025-10-28T05:19:14.711768479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:14.713058 containerd[1596]: time="2025-10-28T05:19:14.712939866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 05:19:14.713058 containerd[1596]: time="2025-10-28T05:19:14.712980529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 05:19:14.713539 kubelet[2761]: E1028 05:19:14.713466 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:19:14.713702 kubelet[2761]: E1028 05:19:14.713525 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:19:14.714312 kubelet[2761]: E1028 05:19:14.714254 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-c4bfd986d-xlpx8_calico-system(3e214f89-6d8a-48a0-9125-ff9ee356160a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:14.714479 kubelet[2761]: E1028 05:19:14.714438 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c4bfd986d-xlpx8" podUID="3e214f89-6d8a-48a0-9125-ff9ee356160a" Oct 28 05:19:15.069855 containerd[1596]: time="2025-10-28T05:19:15.069807783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 05:19:15.314296 systemd[1]: Started sshd@9-164.92.80.11:22-139.178.89.65:59914.service - OpenSSH per-connection server daemon (139.178.89.65:59914). 
Oct 28 05:19:15.410809 containerd[1596]: time="2025-10-28T05:19:15.410361843Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:15.411161 containerd[1596]: time="2025-10-28T05:19:15.411064793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 05:19:15.411246 containerd[1596]: time="2025-10-28T05:19:15.411163513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 05:19:15.411669 kubelet[2761]: E1028 05:19:15.411575 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:19:15.411669 kubelet[2761]: E1028 05:19:15.411630 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:19:15.412494 kubelet[2761]: E1028 05:19:15.411754 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-pj4dv_calico-system(65eac80b-7114-46da-934f-c797b4afa603): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:15.414575 containerd[1596]: time="2025-10-28T05:19:15.414529018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 05:19:15.426566 sshd[5006]: Accepted publickey for core from 139.178.89.65 port 59914 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:15.429830 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:15.437915 systemd-logind[1570]: New session 10 of user core. Oct 28 05:19:15.443947 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 28 05:19:15.624715 sshd[5009]: Connection closed by 139.178.89.65 port 59914 Oct 28 05:19:15.635798 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:15.644050 systemd[1]: Started sshd@10-164.92.80.11:22-139.178.89.65:59918.service - OpenSSH per-connection server daemon (139.178.89.65:59918). Oct 28 05:19:15.645827 systemd[1]: sshd@9-164.92.80.11:22-139.178.89.65:59914.service: Deactivated successfully. Oct 28 05:19:15.650636 systemd[1]: session-10.scope: Deactivated successfully. Oct 28 05:19:15.654873 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit. Oct 28 05:19:15.657907 systemd-logind[1570]: Removed session 10. 
Oct 28 05:19:15.713550 containerd[1596]: time="2025-10-28T05:19:15.713376790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:15.714540 containerd[1596]: time="2025-10-28T05:19:15.714201077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 05:19:15.714540 containerd[1596]: time="2025-10-28T05:19:15.714328119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 05:19:15.715659 kubelet[2761]: E1028 05:19:15.715591 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:19:15.718239 kubelet[2761]: E1028 05:19:15.715893 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:19:15.718239 kubelet[2761]: E1028 05:19:15.718079 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-pj4dv_calico-system(65eac80b-7114-46da-934f-c797b4afa603): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:15.718239 kubelet[2761]: E1028 05:19:15.718172 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pj4dv" podUID="65eac80b-7114-46da-934f-c797b4afa603" Oct 28 05:19:15.765870 sshd[5019]: Accepted publickey for core from 139.178.89.65 port 59918 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:15.768490 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:15.777981 systemd-logind[1570]: New session 11 of user core. 
Oct 28 05:19:15.787006 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 28 05:19:16.000123 sshd[5025]: Connection closed by 139.178.89.65 port 59918 Oct 28 05:19:16.002930 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:16.024731 systemd[1]: sshd@10-164.92.80.11:22-139.178.89.65:59918.service: Deactivated successfully. Oct 28 05:19:16.029354 systemd[1]: session-11.scope: Deactivated successfully. Oct 28 05:19:16.034211 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit. Oct 28 05:19:16.043054 systemd[1]: Started sshd@11-164.92.80.11:22-139.178.89.65:43940.service - OpenSSH per-connection server daemon (139.178.89.65:43940). Oct 28 05:19:16.046305 systemd-logind[1570]: Removed session 11. Oct 28 05:19:16.065272 containerd[1596]: time="2025-10-28T05:19:16.065195122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:19:16.121848 sshd[5036]: Accepted publickey for core from 139.178.89.65 port 43940 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:16.124586 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:16.137786 systemd-logind[1570]: New session 12 of user core. Oct 28 05:19:16.141156 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 28 05:19:16.272695 sshd[5039]: Connection closed by 139.178.89.65 port 43940 Oct 28 05:19:16.271299 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:16.277233 systemd[1]: sshd@11-164.92.80.11:22-139.178.89.65:43940.service: Deactivated successfully. Oct 28 05:19:16.277843 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit. Oct 28 05:19:16.280665 systemd[1]: session-12.scope: Deactivated successfully. Oct 28 05:19:16.285466 systemd-logind[1570]: Removed session 12. 
Oct 28 05:19:16.403735 containerd[1596]: time="2025-10-28T05:19:16.403626530Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:16.404492 containerd[1596]: time="2025-10-28T05:19:16.404455635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:19:16.404610 containerd[1596]: time="2025-10-28T05:19:16.404545124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:19:16.404813 kubelet[2761]: E1028 05:19:16.404769 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:16.404922 kubelet[2761]: E1028 05:19:16.404834 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:16.404969 kubelet[2761]: E1028 05:19:16.404940 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85d779dd4f-td9lt_calico-apiserver(5a48253c-a0f4-4a3b-ba79-be0ea189e322): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:16.405019 kubelet[2761]: E1028 05:19:16.404985 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" podUID="5a48253c-a0f4-4a3b-ba79-be0ea189e322" Oct 28 05:19:21.288875 systemd[1]: Started sshd@12-164.92.80.11:22-139.178.89.65:43942.service - OpenSSH per-connection server daemon (139.178.89.65:43942). Oct 28 05:19:21.369144 sshd[5056]: Accepted publickey for core from 139.178.89.65 port 43942 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:21.370743 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:21.378036 systemd-logind[1570]: New session 13 of user core. Oct 28 05:19:21.391160 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 28 05:19:21.491788 sshd[5060]: Connection closed by 139.178.89.65 port 43942 Oct 28 05:19:21.492803 sshd-session[5056]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:21.498591 systemd[1]: sshd@12-164.92.80.11:22-139.178.89.65:43942.service: Deactivated successfully. 
Oct 28 05:19:21.502587 systemd[1]: session-13.scope: Deactivated successfully. Oct 28 05:19:21.504612 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit. Oct 28 05:19:21.506553 systemd-logind[1570]: Removed session 13. Oct 28 05:19:23.063494 kubelet[2761]: E1028 05:19:23.063188 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:23.068383 kubelet[2761]: E1028 05:19:23.068196 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64c845d8b5-xc7c2" podUID="a4214a7a-e8c1-4291-b44c-d7574f9e0241" Oct 28 05:19:25.065618 kubelet[2761]: E1028 05:19:25.065275 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2jltp" podUID="1c4104d9-7ba6-4171-8c43-b6c170ee1774" Oct 28 05:19:25.066401 kubelet[2761]: E1028 05:19:25.066353 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-74h24" podUID="2dffcfa9-84ba-4077-b168-bc5f5cb035a9" Oct 28 05:19:26.065135 kubelet[2761]: E1028 05:19:26.064967 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c4bfd986d-xlpx8" podUID="3e214f89-6d8a-48a0-9125-ff9ee356160a" Oct 28 05:19:26.512111 systemd[1]: Started 
sshd@13-164.92.80.11:22-139.178.89.65:48194.service - OpenSSH per-connection server daemon (139.178.89.65:48194). Oct 28 05:19:26.585937 sshd[5081]: Accepted publickey for core from 139.178.89.65 port 48194 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:26.587531 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:26.594385 systemd-logind[1570]: New session 14 of user core. Oct 28 05:19:26.600943 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 28 05:19:26.737970 sshd[5084]: Connection closed by 139.178.89.65 port 48194 Oct 28 05:19:26.739038 sshd-session[5081]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:26.750735 systemd[1]: sshd@13-164.92.80.11:22-139.178.89.65:48194.service: Deactivated successfully. Oct 28 05:19:26.753605 systemd[1]: session-14.scope: Deactivated successfully. Oct 28 05:19:26.755309 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit. Oct 28 05:19:26.757083 systemd-logind[1570]: Removed session 14. Oct 28 05:19:26.905691 containerd[1596]: time="2025-10-28T05:19:26.905466193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4db498e89a2197db3158e2b4616e41b8baf8f99041d1c4c335b4618cddff042f\" id:\"562fa3e17a72071dcddb16f8f8b9821cb98f968b95dd0b58fd4cd18920ed3e18\" pid:5108 exited_at:{seconds:1761628766 nanos:904831561}" Oct 28 05:19:26.914786 kubelet[2761]: E1028 05:19:26.914734 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:27.062824 kubelet[2761]: E1028 05:19:27.062480 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:29.065732 kubelet[2761]: E1028 05:19:29.064451 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" podUID="5a48253c-a0f4-4a3b-ba79-be0ea189e322" Oct 28 05:19:30.067100 kubelet[2761]: E1028 05:19:30.066932 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-pj4dv" podUID="65eac80b-7114-46da-934f-c797b4afa603" Oct 28 05:19:31.757087 systemd[1]: Started sshd@14-164.92.80.11:22-139.178.89.65:48210.service - OpenSSH per-connection server daemon (139.178.89.65:48210). Oct 28 05:19:31.881185 sshd[5121]: Accepted publickey for core from 139.178.89.65 port 48210 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:31.885079 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:31.895211 systemd-logind[1570]: New session 15 of user core. Oct 28 05:19:31.897881 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 28 05:19:32.033743 sshd[5124]: Connection closed by 139.178.89.65 port 48210 Oct 28 05:19:32.034489 sshd-session[5121]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:32.040808 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit. Oct 28 05:19:32.041101 systemd[1]: sshd@14-164.92.80.11:22-139.178.89.65:48210.service: Deactivated successfully. Oct 28 05:19:32.044207 systemd[1]: session-15.scope: Deactivated successfully. Oct 28 05:19:32.046372 systemd-logind[1570]: Removed session 15. Oct 28 05:19:35.064518 kubelet[2761]: E1028 05:19:35.064253 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:36.063745 containerd[1596]: time="2025-10-28T05:19:36.063284541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 05:19:36.375360 containerd[1596]: time="2025-10-28T05:19:36.375017964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:36.375963 containerd[1596]: time="2025-10-28T05:19:36.375917899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 05:19:36.376154 containerd[1596]: time="2025-10-28T05:19:36.375947543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 05:19:36.376258 kubelet[2761]: E1028 05:19:36.376164 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:19:36.376258 kubelet[2761]: E1028 05:19:36.376207 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:19:36.384761 kubelet[2761]: E1028 05:19:36.384453 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-64c845d8b5-xc7c2_calico-system(a4214a7a-e8c1-4291-b44c-d7574f9e0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:36.386856 containerd[1596]: time="2025-10-28T05:19:36.386811472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 05:19:36.707278 containerd[1596]: time="2025-10-28T05:19:36.707145144Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:36.710173 containerd[1596]: time="2025-10-28T05:19:36.710113540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 05:19:36.710173 containerd[1596]: time="2025-10-28T05:19:36.710138072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 05:19:36.710467 kubelet[2761]: E1028 05:19:36.710417 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:19:36.710511 kubelet[2761]: E1028 05:19:36.710462 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:19:36.710591 kubelet[2761]: E1028 05:19:36.710537 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-64c845d8b5-xc7c2_calico-system(a4214a7a-e8c1-4291-b44c-d7574f9e0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:36.710625 kubelet[2761]: E1028 05:19:36.710585 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64c845d8b5-xc7c2" podUID="a4214a7a-e8c1-4291-b44c-d7574f9e0241" Oct 28 05:19:37.055003 systemd[1]: Started sshd@15-164.92.80.11:22-139.178.89.65:53250.service - OpenSSH per-connection server daemon (139.178.89.65:53250). 
Oct 28 05:19:37.131466 sshd[5135]: Accepted publickey for core from 139.178.89.65 port 53250 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:37.133603 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:37.140831 systemd-logind[1570]: New session 16 of user core. Oct 28 05:19:37.146892 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 28 05:19:37.311049 sshd[5138]: Connection closed by 139.178.89.65 port 53250 Oct 28 05:19:37.315439 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:37.327821 systemd[1]: sshd@15-164.92.80.11:22-139.178.89.65:53250.service: Deactivated successfully. Oct 28 05:19:37.332353 systemd[1]: session-16.scope: Deactivated successfully. Oct 28 05:19:37.336048 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit. Oct 28 05:19:37.340313 systemd-logind[1570]: Removed session 16. Oct 28 05:19:37.345211 systemd[1]: Started sshd@16-164.92.80.11:22-139.178.89.65:53262.service - OpenSSH per-connection server daemon (139.178.89.65:53262). Oct 28 05:19:37.462429 sshd[5150]: Accepted publickey for core from 139.178.89.65 port 53262 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:37.465070 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:37.477031 systemd-logind[1570]: New session 17 of user core. Oct 28 05:19:37.483841 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 28 05:19:37.802393 sshd[5153]: Connection closed by 139.178.89.65 port 53262 Oct 28 05:19:37.805771 sshd-session[5150]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:37.820808 systemd[1]: sshd@16-164.92.80.11:22-139.178.89.65:53262.service: Deactivated successfully. Oct 28 05:19:37.826861 systemd[1]: session-17.scope: Deactivated successfully. Oct 28 05:19:37.830000 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit. Oct 28 05:19:37.839323 systemd[1]: Started sshd@17-164.92.80.11:22-139.178.89.65:53266.service - OpenSSH per-connection server daemon (139.178.89.65:53266). Oct 28 05:19:37.843131 systemd-logind[1570]: Removed session 17. Oct 28 05:19:37.938881 sshd[5163]: Accepted publickey for core from 139.178.89.65 port 53266 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:37.941136 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:37.949438 systemd-logind[1570]: New session 18 of user core. Oct 28 05:19:37.951911 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 28 05:19:38.065206 containerd[1596]: time="2025-10-28T05:19:38.065065385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 05:19:38.396196 containerd[1596]: time="2025-10-28T05:19:38.396070352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:38.397290 containerd[1596]: time="2025-10-28T05:19:38.397241279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 05:19:38.397400 containerd[1596]: time="2025-10-28T05:19:38.397339574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 05:19:38.397947 kubelet[2761]: E1028 05:19:38.397900 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 05:19:38.398310 kubelet[2761]: E1028 05:19:38.397972 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 05:19:38.398310 kubelet[2761]: E1028 05:19:38.398072 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-2jltp_calico-system(1c4104d9-7ba6-4171-8c43-b6c170ee1774): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:38.398310 kubelet[2761]: E1028 05:19:38.398192 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2jltp" podUID="1c4104d9-7ba6-4171-8c43-b6c170ee1774" Oct 28 05:19:38.781509 sshd[5166]: Connection closed by 139.178.89.65 port 53266 Oct 28 05:19:38.783967 sshd-session[5163]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:38.802374 systemd[1]: sshd@17-164.92.80.11:22-139.178.89.65:53266.service: Deactivated successfully. Oct 28 05:19:38.808148 systemd[1]: session-18.scope: Deactivated successfully. Oct 28 05:19:38.810899 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit. Oct 28 05:19:38.818793 systemd[1]: Started sshd@18-164.92.80.11:22-139.178.89.65:53270.service - OpenSSH per-connection server daemon (139.178.89.65:53270). Oct 28 05:19:38.820548 systemd-logind[1570]: Removed session 18. 
Oct 28 05:19:38.960667 sshd[5180]: Accepted publickey for core from 139.178.89.65 port 53270 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:38.964120 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:38.974762 systemd-logind[1570]: New session 19 of user core. Oct 28 05:19:38.979026 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 28 05:19:39.445781 sshd[5184]: Connection closed by 139.178.89.65 port 53270 Oct 28 05:19:39.446219 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:39.461658 systemd[1]: sshd@18-164.92.80.11:22-139.178.89.65:53270.service: Deactivated successfully. Oct 28 05:19:39.466398 systemd[1]: session-19.scope: Deactivated successfully. Oct 28 05:19:39.470955 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit. Oct 28 05:19:39.475968 systemd-logind[1570]: Removed session 19. Oct 28 05:19:39.479982 systemd[1]: Started sshd@19-164.92.80.11:22-139.178.89.65:53276.service - OpenSSH per-connection server daemon (139.178.89.65:53276). Oct 28 05:19:39.602667 sshd[5194]: Accepted publickey for core from 139.178.89.65 port 53276 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:39.606022 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:39.615774 systemd-logind[1570]: New session 20 of user core. Oct 28 05:19:39.619085 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 28 05:19:39.815816 sshd[5197]: Connection closed by 139.178.89.65 port 53276 Oct 28 05:19:39.816260 sshd-session[5194]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:39.824122 systemd[1]: sshd@19-164.92.80.11:22-139.178.89.65:53276.service: Deactivated successfully. Oct 28 05:19:39.827516 systemd[1]: session-20.scope: Deactivated successfully. Oct 28 05:19:39.830814 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit. Oct 28 05:19:39.837355 systemd-logind[1570]: Removed session 20. 
Oct 28 05:19:40.065750 containerd[1596]: time="2025-10-28T05:19:40.064456903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:19:40.377127 containerd[1596]: time="2025-10-28T05:19:40.377020792Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:40.377857 containerd[1596]: time="2025-10-28T05:19:40.377808818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:19:40.377962 containerd[1596]: time="2025-10-28T05:19:40.377899846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:19:40.378784 kubelet[2761]: E1028 05:19:40.378718 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:40.380791 kubelet[2761]: E1028 05:19:40.378795 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:40.380791 kubelet[2761]: E1028 05:19:40.379147 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85d779dd4f-74h24_calico-apiserver(2dffcfa9-84ba-4077-b168-bc5f5cb035a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:40.380791 kubelet[2761]: E1028 05:19:40.379363 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-74h24" podUID="2dffcfa9-84ba-4077-b168-bc5f5cb035a9" Oct 28 05:19:40.381003 containerd[1596]: time="2025-10-28T05:19:40.380552702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 05:19:40.686396 containerd[1596]: time="2025-10-28T05:19:40.684825038Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:40.686396 containerd[1596]: time="2025-10-28T05:19:40.685744779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 05:19:40.686396 
containerd[1596]: time="2025-10-28T05:19:40.685883699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 05:19:40.686924 kubelet[2761]: E1028 05:19:40.686211 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:19:40.687179 kubelet[2761]: E1028 05:19:40.687091 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:19:40.687658 kubelet[2761]: E1028 05:19:40.687334 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-c4bfd986d-xlpx8_calico-system(3e214f89-6d8a-48a0-9125-ff9ee356160a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:40.687658 kubelet[2761]: E1028 05:19:40.687477 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c4bfd986d-xlpx8" podUID="3e214f89-6d8a-48a0-9125-ff9ee356160a" Oct 28 05:19:41.077287 containerd[1596]: time="2025-10-28T05:19:41.076847469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:19:41.420106 containerd[1596]: time="2025-10-28T05:19:41.419852502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:41.421664 containerd[1596]: time="2025-10-28T05:19:41.420590949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:19:41.421933 containerd[1596]: time="2025-10-28T05:19:41.421911065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:19:41.422095 kubelet[2761]: E1028 05:19:41.422054 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:41.422432 kubelet[2761]: E1028 05:19:41.422105 2761 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:19:41.422432 kubelet[2761]: E1028 05:19:41.422181 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85d779dd4f-td9lt_calico-apiserver(5a48253c-a0f4-4a3b-ba79-be0ea189e322): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:41.422432 kubelet[2761]: E1028 05:19:41.422216 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" podUID="5a48253c-a0f4-4a3b-ba79-be0ea189e322" Oct 28 05:19:44.063915 containerd[1596]: time="2025-10-28T05:19:44.063772895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 05:19:44.393951 containerd[1596]: time="2025-10-28T05:19:44.393822874Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:44.395863 containerd[1596]: time="2025-10-28T05:19:44.395694667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 05:19:44.395863 containerd[1596]: time="2025-10-28T05:19:44.395752893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 05:19:44.396819 kubelet[2761]: E1028 05:19:44.396179 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:19:44.396819 kubelet[2761]: E1028 05:19:44.396234 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:19:44.396819 kubelet[2761]: E1028 05:19:44.396318 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-pj4dv_calico-system(65eac80b-7114-46da-934f-c797b4afa603): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 
05:19:44.400607 containerd[1596]: time="2025-10-28T05:19:44.400316802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 05:19:44.759935 containerd[1596]: time="2025-10-28T05:19:44.759694607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:19:44.760577 containerd[1596]: time="2025-10-28T05:19:44.760451857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 05:19:44.760577 containerd[1596]: time="2025-10-28T05:19:44.760546329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 05:19:44.761585 kubelet[2761]: E1028 05:19:44.760890 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:19:44.761585 kubelet[2761]: E1028 05:19:44.760943 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:19:44.761585 kubelet[2761]: E1028 05:19:44.761027 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-pj4dv_calico-system(65eac80b-7114-46da-934f-c797b4afa603): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 05:19:44.761797 kubelet[2761]: E1028 05:19:44.761069 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pj4dv" podUID="65eac80b-7114-46da-934f-c797b4afa603" Oct 28 05:19:44.838275 systemd[1]: Started sshd@20-164.92.80.11:22-139.178.89.65:53282.service - OpenSSH per-connection server daemon (139.178.89.65:53282). 
Oct 28 05:19:44.972875 sshd[5220]: Accepted publickey for core from 139.178.89.65 port 53282 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:44.977180 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:44.987334 systemd-logind[1570]: New session 21 of user core. Oct 28 05:19:44.994078 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 28 05:19:45.222114 sshd[5224]: Connection closed by 139.178.89.65 port 53282 Oct 28 05:19:45.222888 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:45.228963 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit. Oct 28 05:19:45.229803 systemd[1]: sshd@20-164.92.80.11:22-139.178.89.65:53282.service: Deactivated successfully. Oct 28 05:19:45.233545 systemd[1]: session-21.scope: Deactivated successfully. Oct 28 05:19:45.238836 systemd-logind[1570]: Removed session 21. Oct 28 05:19:48.065014 kubelet[2761]: E1028 05:19:48.064954 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64c845d8b5-xc7c2" podUID="a4214a7a-e8c1-4291-b44c-d7574f9e0241" Oct 28 05:19:50.068121 kubelet[2761]: E1028 05:19:50.068079 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 28 05:19:50.238951 systemd[1]: Started sshd@21-164.92.80.11:22-139.178.89.65:54516.service - OpenSSH per-connection server daemon (139.178.89.65:54516). Oct 28 05:19:50.313663 sshd[5238]: Accepted publickey for core from 139.178.89.65 port 54516 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:50.314703 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:50.322607 systemd-logind[1570]: New session 22 of user core. Oct 28 05:19:50.328518 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 28 05:19:50.449002 sshd[5241]: Connection closed by 139.178.89.65 port 54516 Oct 28 05:19:50.450227 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:50.458597 systemd[1]: sshd@21-164.92.80.11:22-139.178.89.65:54516.service: Deactivated successfully. Oct 28 05:19:50.461958 systemd[1]: session-22.scope: Deactivated successfully. Oct 28 05:19:50.463525 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit. Oct 28 05:19:50.465587 systemd-logind[1570]: Removed session 22. 
Oct 28 05:19:51.067912 kubelet[2761]: E1028 05:19:51.067733 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-74h24" podUID="2dffcfa9-84ba-4077-b168-bc5f5cb035a9" Oct 28 05:19:52.063741 kubelet[2761]: E1028 05:19:52.063687 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c4bfd986d-xlpx8" podUID="3e214f89-6d8a-48a0-9125-ff9ee356160a" Oct 28 05:19:52.064507 kubelet[2761]: E1028 05:19:52.063802 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2jltp" podUID="1c4104d9-7ba6-4171-8c43-b6c170ee1774" Oct 28 05:19:55.068237 kubelet[2761]: E1028 05:19:55.066490 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85d779dd4f-td9lt" podUID="5a48253c-a0f4-4a3b-ba79-be0ea189e322" Oct 28 05:19:55.468094 systemd[1]: Started sshd@22-164.92.80.11:22-139.178.89.65:54526.service - OpenSSH per-connection server daemon (139.178.89.65:54526). Oct 28 05:19:55.544747 sshd[5253]: Accepted publickey for core from 139.178.89.65 port 54526 ssh2: RSA SHA256:fMMt40OOzhunw8taaYk6/dlxAvnUrQ4UxcBuxtZpXJk Oct 28 05:19:55.547363 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:19:55.556581 systemd-logind[1570]: New session 23 of user core. Oct 28 05:19:55.559896 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 28 05:19:55.662977 sshd[5257]: Connection closed by 139.178.89.65 port 54526 Oct 28 05:19:55.663860 sshd-session[5253]: pam_unix(sshd:session): session closed for user core Oct 28 05:19:55.672552 systemd[1]: sshd@22-164.92.80.11:22-139.178.89.65:54526.service: Deactivated successfully. Oct 28 05:19:55.677129 systemd[1]: session-23.scope: Deactivated successfully. 
Oct 28 05:19:55.679964 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit. Oct 28 05:19:55.682686 systemd-logind[1570]: Removed session 23. Oct 28 05:19:57.029957 containerd[1596]: time="2025-10-28T05:19:57.029896809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4db498e89a2197db3158e2b4616e41b8baf8f99041d1c4c335b4618cddff042f\" id:\"1a4b0d60afe3849510ca3c5efc9cb16c2b9bc4d20b9184d76aabc888c50ba8f6\" pid:5280 exited_at:{seconds:1761628797 nanos:29370661}"