Oct 30 00:03:49.952783 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Oct 29 22:07:32 -00 2025 Oct 30 00:03:49.952825 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13 Oct 30 00:03:49.952845 kernel: BIOS-provided physical RAM map: Oct 30 00:03:49.952856 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 30 00:03:49.952866 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 30 00:03:49.952876 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 30 00:03:49.952888 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Oct 30 00:03:49.952906 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Oct 30 00:03:49.952916 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 30 00:03:49.952927 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 30 00:03:49.952942 kernel: NX (Execute Disable) protection: active Oct 30 00:03:49.952953 kernel: APIC: Static calls initialized Oct 30 00:03:49.952964 kernel: SMBIOS 2.8 present. 
Oct 30 00:03:49.952977 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Oct 30 00:03:49.952992 kernel: DMI: Memory slots populated: 1/1 Oct 30 00:03:49.953004 kernel: Hypervisor detected: KVM Oct 30 00:03:49.953024 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Oct 30 00:03:49.953035 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 30 00:03:49.953048 kernel: kvm-clock: using sched offset of 5289993897 cycles Oct 30 00:03:49.953061 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 30 00:03:49.953074 kernel: tsc: Detected 1995.307 MHz processor Oct 30 00:03:49.953087 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 30 00:03:49.953117 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 30 00:03:49.953130 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Oct 30 00:03:49.953144 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 30 00:03:49.953160 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 30 00:03:49.953173 kernel: ACPI: Early table checksum verification disabled Oct 30 00:03:49.953184 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Oct 30 00:03:49.953196 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 00:03:49.953209 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 00:03:49.953221 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 00:03:49.953234 kernel: ACPI: FACS 0x000000007FFE0000 000040 Oct 30 00:03:49.953247 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 00:03:49.953260 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 00:03:49.953276 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 00:03:49.953289 
kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 00:03:49.953302 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Oct 30 00:03:49.953315 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Oct 30 00:03:49.953326 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Oct 30 00:03:49.953338 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Oct 30 00:03:49.953357 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Oct 30 00:03:49.953373 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Oct 30 00:03:49.953387 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Oct 30 00:03:49.953400 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 30 00:03:49.953414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 30 00:03:49.953428 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Oct 30 00:03:49.953441 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Oct 30 00:03:49.953453 kernel: Zone ranges: Oct 30 00:03:49.953470 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 30 00:03:49.953561 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Oct 30 00:03:49.953575 kernel: Normal empty Oct 30 00:03:49.953590 kernel: Device empty Oct 30 00:03:49.953602 kernel: Movable zone start for each node Oct 30 00:03:49.953614 kernel: Early memory node ranges Oct 30 00:03:49.953625 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 30 00:03:49.953636 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Oct 30 00:03:49.953649 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Oct 30 00:03:49.953660 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 30 00:03:49.953678 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 30 
00:03:49.953690 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Oct 30 00:03:49.953701 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 30 00:03:49.953718 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 30 00:03:49.953730 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 30 00:03:49.953747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 30 00:03:49.953759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 30 00:03:49.953771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 30 00:03:49.953788 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 30 00:03:49.953805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 30 00:03:49.953819 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 30 00:03:49.953834 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 30 00:03:49.953881 kernel: TSC deadline timer available Oct 30 00:03:49.953890 kernel: CPU topo: Max. logical packages: 1 Oct 30 00:03:49.953898 kernel: CPU topo: Max. logical dies: 1 Oct 30 00:03:49.953905 kernel: CPU topo: Max. dies per package: 1 Oct 30 00:03:49.953913 kernel: CPU topo: Max. threads per core: 1 Oct 30 00:03:49.953921 kernel: CPU topo: Num. cores per package: 2 Oct 30 00:03:49.953933 kernel: CPU topo: Num. 
threads per package: 2 Oct 30 00:03:49.953944 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Oct 30 00:03:49.953963 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 30 00:03:49.953974 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Oct 30 00:03:49.953985 kernel: Booting paravirtualized kernel on KVM Oct 30 00:03:49.953999 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 30 00:03:49.954010 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 30 00:03:49.954022 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Oct 30 00:03:49.954033 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Oct 30 00:03:49.954045 kernel: pcpu-alloc: [0] 0 1 Oct 30 00:03:49.954060 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 30 00:03:49.954074 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13 Oct 30 00:03:49.954087 kernel: random: crng init done Oct 30 00:03:49.956242 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 30 00:03:49.956265 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 30 00:03:49.956276 kernel: Fallback order for Node 0: 0 Oct 30 00:03:49.956286 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524153 Oct 30 00:03:49.956295 kernel: Policy zone: DMA32 Oct 30 00:03:49.956312 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 30 00:03:49.956320 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 30 00:03:49.956328 kernel: Kernel/User page tables isolation: enabled Oct 30 00:03:49.956336 kernel: ftrace: allocating 40021 entries in 157 pages Oct 30 00:03:49.956344 kernel: ftrace: allocated 157 pages with 5 groups Oct 30 00:03:49.956352 kernel: Dynamic Preempt: voluntary Oct 30 00:03:49.956360 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 30 00:03:49.956369 kernel: rcu: RCU event tracing is enabled. Oct 30 00:03:49.956378 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 30 00:03:49.956389 kernel: Trampoline variant of Tasks RCU enabled. Oct 30 00:03:49.956397 kernel: Rude variant of Tasks RCU enabled. Oct 30 00:03:49.956404 kernel: Tracing variant of Tasks RCU enabled. Oct 30 00:03:49.956413 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 30 00:03:49.956421 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 30 00:03:49.956429 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 30 00:03:49.956444 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 30 00:03:49.956452 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 30 00:03:49.956460 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 30 00:03:49.956471 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 30 00:03:49.956478 kernel: Console: colour VGA+ 80x25 Oct 30 00:03:49.956486 kernel: printk: legacy console [tty0] enabled Oct 30 00:03:49.956494 kernel: printk: legacy console [ttyS0] enabled Oct 30 00:03:49.956502 kernel: ACPI: Core revision 20240827 Oct 30 00:03:49.956510 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 30 00:03:49.956528 kernel: APIC: Switch to symmetric I/O mode setup Oct 30 00:03:49.956539 kernel: x2apic enabled Oct 30 00:03:49.956547 kernel: APIC: Switched APIC routing to: physical x2apic Oct 30 00:03:49.956556 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 30 00:03:49.956564 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985ba32100, max_idle_ns: 881590654722 ns Oct 30 00:03:49.956578 kernel: Calibrating delay loop (skipped) preset value.. 3990.61 BogoMIPS (lpj=1995307) Oct 30 00:03:49.956589 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 30 00:03:49.956597 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 30 00:03:49.956606 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 30 00:03:49.956615 kernel: Spectre V2 : Mitigation: Retpolines Oct 30 00:03:49.956623 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 30 00:03:49.956634 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Oct 30 00:03:49.956643 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 30 00:03:49.956651 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 30 00:03:49.956659 kernel: MDS: Mitigation: Clear CPU buffers Oct 30 00:03:49.956668 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 30 00:03:49.956676 kernel: active return thunk: its_return_thunk Oct 30 00:03:49.956685 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 30 
00:03:49.956693 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 30 00:03:49.956705 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 30 00:03:49.956713 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 30 00:03:49.956721 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 30 00:03:49.956730 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 30 00:03:49.956739 kernel: Freeing SMP alternatives memory: 32K Oct 30 00:03:49.956747 kernel: pid_max: default: 32768 minimum: 301 Oct 30 00:03:49.956755 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 30 00:03:49.956764 kernel: landlock: Up and running. Oct 30 00:03:49.956772 kernel: SELinux: Initializing. Oct 30 00:03:49.956783 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 30 00:03:49.956791 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 30 00:03:49.956800 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Oct 30 00:03:49.956808 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Oct 30 00:03:49.956817 kernel: signal: max sigframe size: 1776 Oct 30 00:03:49.956826 kernel: rcu: Hierarchical SRCU implementation. Oct 30 00:03:49.956835 kernel: rcu: Max phase no-delay instances is 400. Oct 30 00:03:49.956843 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 30 00:03:49.956851 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 30 00:03:49.956863 kernel: smp: Bringing up secondary CPUs ... Oct 30 00:03:49.956875 kernel: smpboot: x86: Booting SMP configuration: Oct 30 00:03:49.956883 kernel: .... 
node #0, CPUs: #1 Oct 30 00:03:49.956892 kernel: smp: Brought up 1 node, 2 CPUs Oct 30 00:03:49.956900 kernel: smpboot: Total of 2 processors activated (7981.22 BogoMIPS) Oct 30 00:03:49.956910 kernel: Memory: 1960764K/2096612K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45544K init, 1184K bss, 131284K reserved, 0K cma-reserved) Oct 30 00:03:49.956918 kernel: devtmpfs: initialized Oct 30 00:03:49.956926 kernel: x86/mm: Memory block size: 128MB Oct 30 00:03:49.956935 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 30 00:03:49.956946 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 30 00:03:49.956955 kernel: pinctrl core: initialized pinctrl subsystem Oct 30 00:03:49.956963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 30 00:03:49.956971 kernel: audit: initializing netlink subsys (disabled) Oct 30 00:03:49.956980 kernel: audit: type=2000 audit(1761782625.781:1): state=initialized audit_enabled=0 res=1 Oct 30 00:03:49.956988 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 30 00:03:49.956996 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 30 00:03:49.957005 kernel: cpuidle: using governor menu Oct 30 00:03:49.957013 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 30 00:03:49.957024 kernel: dca service started, version 1.12.1 Oct 30 00:03:49.957033 kernel: PCI: Using configuration type 1 for base access Oct 30 00:03:49.957041 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 30 00:03:49.957050 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 30 00:03:49.957058 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 30 00:03:49.957067 kernel: ACPI: Added _OSI(Module Device) Oct 30 00:03:49.957075 kernel: ACPI: Added _OSI(Processor Device) Oct 30 00:03:49.957084 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 30 00:03:49.957092 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 30 00:03:49.957117 kernel: ACPI: Interpreter enabled Oct 30 00:03:49.957125 kernel: ACPI: PM: (supports S0 S5) Oct 30 00:03:49.957133 kernel: ACPI: Using IOAPIC for interrupt routing Oct 30 00:03:49.957142 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 30 00:03:49.957150 kernel: PCI: Using E820 reservations for host bridge windows Oct 30 00:03:49.957159 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 30 00:03:49.957167 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 30 00:03:49.957423 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 30 00:03:49.957591 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Oct 30 00:03:49.957692 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Oct 30 00:03:49.957704 kernel: acpiphp: Slot [3] registered Oct 30 00:03:49.957713 kernel: acpiphp: Slot [4] registered Oct 30 00:03:49.957722 kernel: acpiphp: Slot [5] registered Oct 30 00:03:49.957731 kernel: acpiphp: Slot [6] registered Oct 30 00:03:49.957739 kernel: acpiphp: Slot [7] registered Oct 30 00:03:49.957748 kernel: acpiphp: Slot [8] registered Oct 30 00:03:49.957756 kernel: acpiphp: Slot [9] registered Oct 30 00:03:49.957770 kernel: acpiphp: Slot [10] registered Oct 30 00:03:49.957779 kernel: acpiphp: Slot [11] registered Oct 30 00:03:49.957787 kernel: acpiphp: Slot [12] registered 
Oct 30 00:03:49.957795 kernel: acpiphp: Slot [13] registered Oct 30 00:03:49.957804 kernel: acpiphp: Slot [14] registered Oct 30 00:03:49.957812 kernel: acpiphp: Slot [15] registered Oct 30 00:03:49.957820 kernel: acpiphp: Slot [16] registered Oct 30 00:03:49.957828 kernel: acpiphp: Slot [17] registered Oct 30 00:03:49.957836 kernel: acpiphp: Slot [18] registered Oct 30 00:03:49.957848 kernel: acpiphp: Slot [19] registered Oct 30 00:03:49.957856 kernel: acpiphp: Slot [20] registered Oct 30 00:03:49.957864 kernel: acpiphp: Slot [21] registered Oct 30 00:03:49.957873 kernel: acpiphp: Slot [22] registered Oct 30 00:03:49.957881 kernel: acpiphp: Slot [23] registered Oct 30 00:03:49.957890 kernel: acpiphp: Slot [24] registered Oct 30 00:03:49.957898 kernel: acpiphp: Slot [25] registered Oct 30 00:03:49.957907 kernel: acpiphp: Slot [26] registered Oct 30 00:03:49.957915 kernel: acpiphp: Slot [27] registered Oct 30 00:03:49.957926 kernel: acpiphp: Slot [28] registered Oct 30 00:03:49.957935 kernel: acpiphp: Slot [29] registered Oct 30 00:03:49.957943 kernel: acpiphp: Slot [30] registered Oct 30 00:03:49.957951 kernel: acpiphp: Slot [31] registered Oct 30 00:03:49.957960 kernel: PCI host bridge to bus 0000:00 Oct 30 00:03:49.958092 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 30 00:03:49.958224 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 30 00:03:49.958326 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 30 00:03:49.958415 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 30 00:03:49.958496 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Oct 30 00:03:49.958577 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 30 00:03:49.958731 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Oct 30 00:03:49.958849 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI 
endpoint Oct 30 00:03:49.958957 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Oct 30 00:03:49.959053 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Oct 30 00:03:49.959179 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Oct 30 00:03:49.959270 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Oct 30 00:03:49.959361 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Oct 30 00:03:49.959451 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Oct 30 00:03:49.959574 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Oct 30 00:03:49.959665 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Oct 30 00:03:49.959772 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Oct 30 00:03:49.959863 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 30 00:03:49.959952 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 30 00:03:49.960077 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Oct 30 00:03:49.960209 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Oct 30 00:03:49.960306 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Oct 30 00:03:49.960402 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Oct 30 00:03:49.960493 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Oct 30 00:03:49.960584 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 30 00:03:49.960696 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 30 00:03:49.960811 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Oct 30 00:03:49.960936 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Oct 30 00:03:49.961055 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Oct 30 00:03:49.961202 kernel: 
pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 30 00:03:49.961298 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Oct 30 00:03:49.961433 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Oct 30 00:03:49.961548 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Oct 30 00:03:49.961721 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Oct 30 00:03:49.961818 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Oct 30 00:03:49.961912 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Oct 30 00:03:49.962009 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Oct 30 00:03:49.962144 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 30 00:03:49.962237 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Oct 30 00:03:49.962352 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Oct 30 00:03:49.962463 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Oct 30 00:03:49.962574 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 30 00:03:49.962668 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Oct 30 00:03:49.962766 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Oct 30 00:03:49.962856 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Oct 30 00:03:49.962966 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Oct 30 00:03:49.963057 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Oct 30 00:03:49.963168 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Oct 30 00:03:49.963180 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 30 00:03:49.963192 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 30 00:03:49.963201 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 30 00:03:49.963210 kernel: ACPI: PCI: Interrupt link 
LNKD configured for IRQ 11 Oct 30 00:03:49.963218 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 30 00:03:49.963227 kernel: iommu: Default domain type: Translated Oct 30 00:03:49.963235 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 30 00:03:49.963244 kernel: PCI: Using ACPI for IRQ routing Oct 30 00:03:49.963252 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 30 00:03:49.963261 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 30 00:03:49.963272 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Oct 30 00:03:49.963393 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 30 00:03:49.963514 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 30 00:03:49.963606 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 30 00:03:49.963617 kernel: vgaarb: loaded Oct 30 00:03:49.963626 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 30 00:03:49.963635 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 30 00:03:49.963643 kernel: clocksource: Switched to clocksource kvm-clock Oct 30 00:03:49.963652 kernel: VFS: Disk quotas dquot_6.6.0 Oct 30 00:03:49.963665 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 30 00:03:49.963674 kernel: pnp: PnP ACPI init Oct 30 00:03:49.963683 kernel: pnp: PnP ACPI: found 4 devices Oct 30 00:03:49.963692 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 30 00:03:49.963701 kernel: NET: Registered PF_INET protocol family Oct 30 00:03:49.963709 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 30 00:03:49.963718 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 30 00:03:49.963727 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 30 00:03:49.963735 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) 
Oct 30 00:03:49.963747 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 30 00:03:49.963756 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 30 00:03:49.963764 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 30 00:03:49.963773 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 30 00:03:49.963782 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 30 00:03:49.963790 kernel: NET: Registered PF_XDP protocol family Oct 30 00:03:49.963882 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 30 00:03:49.963967 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 30 00:03:49.964140 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 30 00:03:49.964284 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 30 00:03:49.964373 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 30 00:03:49.964477 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 30 00:03:49.964576 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 30 00:03:49.964589 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 30 00:03:49.964724 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 28198 usecs Oct 30 00:03:49.964738 kernel: PCI: CLS 0 bytes, default 64 Oct 30 00:03:49.964747 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 30 00:03:49.964761 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985ba32100, max_idle_ns: 881590654722 ns Oct 30 00:03:49.964770 kernel: Initialise system trusted keyrings Oct 30 00:03:49.964779 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 30 00:03:49.964789 kernel: Key type asymmetric registered Oct 30 00:03:49.964803 kernel: Asymmetric key parser 'x509' registered Oct 30 00:03:49.964816 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 
250) Oct 30 00:03:49.964829 kernel: io scheduler mq-deadline registered Oct 30 00:03:49.964844 kernel: io scheduler kyber registered Oct 30 00:03:49.964856 kernel: io scheduler bfq registered Oct 30 00:03:49.964865 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 30 00:03:49.964874 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Oct 30 00:03:49.964883 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 30 00:03:49.964891 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 30 00:03:49.964900 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 30 00:03:49.964908 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 30 00:03:49.964917 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 30 00:03:49.964925 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 30 00:03:49.964934 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 30 00:03:49.965080 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 30 00:03:49.965093 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 30 00:03:49.965200 kernel: rtc_cmos 00:03: registered as rtc0 Oct 30 00:03:49.965284 kernel: rtc_cmos 00:03: setting system clock to 2025-10-30T00:03:49 UTC (1761782629) Oct 30 00:03:49.965368 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Oct 30 00:03:49.965379 kernel: intel_pstate: CPU model not supported Oct 30 00:03:49.965387 kernel: NET: Registered PF_INET6 protocol family Oct 30 00:03:49.966170 kernel: Segment Routing with IPv6 Oct 30 00:03:49.966191 kernel: In-situ OAM (IOAM) with IPv6 Oct 30 00:03:49.966203 kernel: NET: Registered PF_PACKET protocol family Oct 30 00:03:49.966216 kernel: Key type dns_resolver registered Oct 30 00:03:49.966228 kernel: IPI shorthand broadcast: enabled Oct 30 00:03:49.966240 kernel: sched_clock: Marking stable (3595004227, 232533458)->(4033146477, -205608792) Oct 30 00:03:49.966252 kernel: registered taskstats version 1 Oct 30 
00:03:49.966268 kernel: Loading compiled-in X.509 certificates Oct 30 00:03:49.966282 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 815fc40077fbc06b8d9e8a6016fea83aecff0a2a' Oct 30 00:03:49.966301 kernel: Demotion targets for Node 0: null Oct 30 00:03:49.966313 kernel: Key type .fscrypt registered Oct 30 00:03:49.966326 kernel: Key type fscrypt-provisioning registered Oct 30 00:03:49.966357 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 30 00:03:49.966368 kernel: ima: Allocated hash algorithm: sha1 Oct 30 00:03:49.966380 kernel: ima: No architecture policies found Oct 30 00:03:49.966389 kernel: clk: Disabling unused clocks Oct 30 00:03:49.966398 kernel: Warning: unable to open an initial console. Oct 30 00:03:49.966407 kernel: Freeing unused kernel image (initmem) memory: 45544K Oct 30 00:03:49.966418 kernel: Write protecting the kernel read-only data: 40960k Oct 30 00:03:49.966427 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K Oct 30 00:03:49.966436 kernel: Run /init as init process Oct 30 00:03:49.966445 kernel: with arguments: Oct 30 00:03:49.966454 kernel: /init Oct 30 00:03:49.966463 kernel: with environment: Oct 30 00:03:49.966471 kernel: HOME=/ Oct 30 00:03:49.966480 kernel: TERM=linux Oct 30 00:03:49.966491 systemd[1]: Successfully made /usr/ read-only. Oct 30 00:03:49.966507 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 00:03:49.966517 systemd[1]: Detected virtualization kvm. Oct 30 00:03:49.966526 systemd[1]: Detected architecture x86-64. Oct 30 00:03:49.966535 systemd[1]: Running in initrd. Oct 30 00:03:49.966544 systemd[1]: No hostname configured, using default hostname. 
Oct 30 00:03:49.966554 systemd[1]: Hostname set to . Oct 30 00:03:49.966563 systemd[1]: Initializing machine ID from VM UUID. Oct 30 00:03:49.966574 systemd[1]: Queued start job for default target initrd.target. Oct 30 00:03:49.966583 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 00:03:49.966593 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 00:03:49.966603 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 30 00:03:49.966613 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 00:03:49.966622 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 30 00:03:49.966635 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 30 00:03:49.966646 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 30 00:03:49.966655 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 30 00:03:49.966664 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 00:03:49.966674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 00:03:49.966683 systemd[1]: Reached target paths.target - Path Units. Oct 30 00:03:49.966694 systemd[1]: Reached target slices.target - Slice Units. Oct 30 00:03:49.966704 systemd[1]: Reached target swap.target - Swaps. Oct 30 00:03:49.966713 systemd[1]: Reached target timers.target - Timer Units. Oct 30 00:03:49.966722 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 00:03:49.966731 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Oct 30 00:03:49.966740 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 30 00:03:49.966749 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 30 00:03:49.966758 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:03:49.966768 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:03:49.966780 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:03:49.966790 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 00:03:49.966799 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 30 00:03:49.966808 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 00:03:49.966818 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 30 00:03:49.966827 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 30 00:03:49.966836 systemd[1]: Starting systemd-fsck-usr.service...
Oct 30 00:03:49.966846 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 00:03:49.966857 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 00:03:49.966867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:03:49.966876 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 30 00:03:49.966886 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:03:49.966895 systemd[1]: Finished systemd-fsck-usr.service.
Oct 30 00:03:49.966963 systemd-journald[193]: Collecting audit messages is disabled.
Oct 30 00:03:49.966996 systemd-journald[193]: Journal started
Oct 30 00:03:49.967032 systemd-journald[193]: Runtime Journal (/run/log/journal/cedcf26f30b346b49c026f6a82ddd835) is 4.9M, max 39.2M, 34.3M free.
Oct 30 00:03:49.940377 systemd-modules-load[194]: Inserted module 'overlay'
Oct 30 00:03:49.974202 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 30 00:03:49.982783 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 00:03:49.987180 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 30 00:03:49.991240 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 00:03:50.070862 kernel: Bridge firewalling registered
Oct 30 00:03:49.996744 systemd-modules-load[194]: Inserted module 'br_netfilter'
Oct 30 00:03:50.072675 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:03:50.076743 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:03:50.078859 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 30 00:03:50.083531 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 30 00:03:50.086029 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 00:03:50.092029 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 30 00:03:50.093338 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 00:03:50.104559 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 00:03:50.120152 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:03:50.121677 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:03:50.131364 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 00:03:50.140221 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 00:03:50.147689 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 30 00:03:50.180213 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13
Oct 30 00:03:50.195446 systemd-resolved[230]: Positive Trust Anchors:
Oct 30 00:03:50.195466 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 30 00:03:50.195519 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 30 00:03:50.200296 systemd-resolved[230]: Defaulting to hostname 'linux'.
Oct 30 00:03:50.201692 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 30 00:03:50.204685 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
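The dracut-cmdline line above shows the kernel command line that carries the dm-verity root hash for the /usr partition (`verity.usrhash=`). As an illustrative sketch (not part of the boot itself), the same parameter can be pulled out of a cmdline string with standard tools; the cmdline below is an excerpt copied from the log:

```shell
# Extract verity.usrhash from a kernel command line (excerpt from the log above).
cmdline='BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 root=LABEL=ROOT flatcar.oem.id=digitalocean verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13'

# One parameter per line, then strip the key prefix.
usrhash=$(printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n 's/^verity\.usrhash=//p')
echo "$usrhash"
```

On a live system the same string is available from /proc/cmdline; verity-setup.service later uses this hash to set up /dev/mapper/usr, as seen further down in the log.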
Oct 30 00:03:50.303191 kernel: SCSI subsystem initialized
Oct 30 00:03:50.315155 kernel: Loading iSCSI transport class v2.0-870.
Oct 30 00:03:50.329162 kernel: iscsi: registered transport (tcp)
Oct 30 00:03:50.355471 kernel: iscsi: registered transport (qla4xxx)
Oct 30 00:03:50.355587 kernel: QLogic iSCSI HBA Driver
Oct 30 00:03:50.383380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 00:03:50.420472 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:03:50.423198 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 00:03:50.477316 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 30 00:03:50.480614 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 30 00:03:50.543167 kernel: raid6: avx2x4 gen() 27622 MB/s
Oct 30 00:03:50.561148 kernel: raid6: avx2x2 gen() 27316 MB/s
Oct 30 00:03:50.580230 kernel: raid6: avx2x1 gen() 16605 MB/s
Oct 30 00:03:50.580379 kernel: raid6: using algorithm avx2x4 gen() 27622 MB/s
Oct 30 00:03:50.600090 kernel: raid6: .... xor() 8841 MB/s, rmw enabled
Oct 30 00:03:50.600253 kernel: raid6: using avx2x2 recovery algorithm
Oct 30 00:03:50.625135 kernel: xor: automatically using best checksumming function avx
Oct 30 00:03:50.791156 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 30 00:03:50.799618 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 00:03:50.803534 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 00:03:50.834315 systemd-udevd[443]: Using default interface naming scheme 'v255'.
Oct 30 00:03:50.841649 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 00:03:50.846320 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 30 00:03:50.871808 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Oct 30 00:03:50.907398 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 00:03:50.910631 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 00:03:51.008111 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:03:51.012992 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 30 00:03:51.085135 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Oct 30 00:03:51.093126 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Oct 30 00:03:51.099318 kernel: scsi host0: Virtio SCSI HBA
Oct 30 00:03:51.121276 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Oct 30 00:03:51.134203 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 30 00:03:51.156904 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 30 00:03:51.156970 kernel: GPT:9289727 != 125829119
Oct 30 00:03:51.156982 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 30 00:03:51.160272 kernel: GPT:9289727 != 125829119
Oct 30 00:03:51.160344 kernel: cryptd: max_cpu_qlen set to 1000
Oct 30 00:03:51.160359 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 30 00:03:51.163642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 30 00:03:51.178130 kernel: AES CTR mode by8 optimization enabled
Oct 30 00:03:51.205122 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Oct 30 00:03:51.224234 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Oct 30 00:03:51.234685 kernel: ACPI: bus type USB registered
Oct 30 00:03:51.233634 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:03:51.233842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
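The virtio_blk line above reports vda as 125829120 512-byte logical blocks, and the GPT warnings (9289727 != 125829119) show the alternate GPT header still sitting where a smaller original disk image ended, which is typical after a cloud image is expanded onto a larger volume; the kernel itself points at GNU Parted as the repair tool. As a small sketch, the size figures in that line can be reproduced from the block count:

```shell
# Reproduce the kernel's size report for vda:
# "125829120 512-byte logical blocks (64.4 GB/60.0 GiB)"
blocks=125829120
bytes=$((blocks * 512))

# Decimal GB (powers of 1000) vs binary GiB (powers of 1024).
awk "BEGIN { printf \"%d bytes = %.1f GB / %.1f GiB\n\", $bytes, $bytes / 1000000000, $bytes / (1024 * 1024 * 1024) }"
```

The last usable LBA on such a disk is blocks − 1 = 125829119, which is exactly the value the kernel expected for the alternate header.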
Oct 30 00:03:51.236350 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:03:51.239127 kernel: libata version 3.00 loaded.
Oct 30 00:03:51.241517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:03:51.247336 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 30 00:03:51.256138 kernel: usbcore: registered new interface driver usbfs
Oct 30 00:03:51.260224 kernel: usbcore: registered new interface driver hub
Oct 30 00:03:51.265143 kernel: usbcore: registered new device driver usb
Oct 30 00:03:51.278164 kernel: ata_piix 0000:00:01.1: version 2.13
Oct 30 00:03:51.287977 kernel: scsi host1: ata_piix
Oct 30 00:03:51.304132 kernel: scsi host2: ata_piix
Oct 30 00:03:51.316404 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Oct 30 00:03:51.316481 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Oct 30 00:03:51.327952 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 30 00:03:51.422782 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 30 00:03:51.423041 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 30 00:03:51.423196 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 30 00:03:51.423323 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Oct 30 00:03:51.423501 kernel: hub 1-0:1.0: USB hub found
Oct 30 00:03:51.423717 kernel: hub 1-0:1.0: 2 ports detected
Oct 30 00:03:51.422254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:03:51.445939 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 30 00:03:51.457779 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 30 00:03:51.458869 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 30 00:03:51.475913 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 30 00:03:51.481311 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 30 00:03:51.494843 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 30 00:03:51.497979 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 00:03:51.500235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:03:51.502477 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 00:03:51.504933 disk-uuid[593]: Primary Header is updated.
Oct 30 00:03:51.504933 disk-uuid[593]: Secondary Entries is updated.
Oct 30 00:03:51.504933 disk-uuid[593]: Secondary Header is updated.
Oct 30 00:03:51.506665 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 30 00:03:51.513187 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 30 00:03:51.519135 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 30 00:03:51.539960 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 00:03:52.521307 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 30 00:03:52.521848 disk-uuid[596]: The operation has completed successfully.
Oct 30 00:03:52.567643 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 30 00:03:52.568868 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 30 00:03:52.604091 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 30 00:03:52.631151 sh[615]: Success
Oct 30 00:03:52.658027 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 30 00:03:52.658166 kernel: device-mapper: uevent: version 1.0.3
Oct 30 00:03:52.661348 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 30 00:03:52.676343 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Oct 30 00:03:52.743790 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 30 00:03:52.747218 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 30 00:03:52.760748 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 30 00:03:52.773783 kernel: BTRFS: device fsid ad8523d8-35e6-44b9-a685-e8d871101da4 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (627)
Oct 30 00:03:52.773854 kernel: BTRFS info (device dm-0): first mount of filesystem ad8523d8-35e6-44b9-a685-e8d871101da4
Oct 30 00:03:52.776460 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 30 00:03:52.787126 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 30 00:03:52.787240 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 30 00:03:52.789619 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 30 00:03:52.791249 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 00:03:52.792356 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 30 00:03:52.793356 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 30 00:03:52.799312 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 30 00:03:52.827164 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (658)
Oct 30 00:03:52.831174 kernel: BTRFS info (device vda6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e
Oct 30 00:03:52.835174 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 30 00:03:52.842787 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 00:03:52.842891 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 00:03:52.851293 kernel: BTRFS info (device vda6): last unmount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e
Oct 30 00:03:52.852493 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 30 00:03:52.855525 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 30 00:03:52.944272 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 00:03:52.951231 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 30 00:03:53.009790 systemd-networkd[798]: lo: Link UP
Oct 30 00:03:53.010829 systemd-networkd[798]: lo: Gained carrier
Oct 30 00:03:53.014237 systemd-networkd[798]: Enumeration completed
Oct 30 00:03:53.014625 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 30 00:03:53.014629 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Oct 30 00:03:53.018296 systemd-networkd[798]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 30 00:03:53.018300 systemd-networkd[798]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 30 00:03:53.019854 systemd-networkd[798]: eth0: Link UP
Oct 30 00:03:53.020794 systemd-networkd[798]: eth1: Link UP
Oct 30 00:03:53.021053 systemd-networkd[798]: eth0: Gained carrier
Oct 30 00:03:53.021073 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 30 00:03:53.021588 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 30 00:03:53.022593 systemd[1]: Reached target network.target - Network.
Oct 30 00:03:53.027933 systemd-networkd[798]: eth1: Gained carrier
Oct 30 00:03:53.027960 systemd-networkd[798]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 30 00:03:53.044951 systemd-networkd[798]: eth0: DHCPv4 address 147.182.197.56/20, gateway 147.182.192.1 acquired from 169.254.169.253
Oct 30 00:03:53.061315 systemd-networkd[798]: eth1: DHCPv4 address 10.124.0.28/20 acquired from 169.254.169.253
Oct 30 00:03:53.131439 ignition[703]: Ignition 2.22.0
Oct 30 00:03:53.131454 ignition[703]: Stage: fetch-offline
Oct 30 00:03:53.131512 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Oct 30 00:03:53.136328 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 00:03:53.131523 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 30 00:03:53.131691 ignition[703]: parsed url from cmdline: ""
Oct 30 00:03:53.131695 ignition[703]: no config URL provided
Oct 30 00:03:53.131701 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Oct 30 00:03:53.141323 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
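The DHCPv4 line above gives eth0 the address 147.182.197.56/20 with gateway 147.182.192.1. As a quick sanity-check sketch (not something the boot performs), the /20 network containing that address can be computed with shell integer arithmetic, confirming the gateway lies in the same subnet:

```shell
# Compute the /20 network containing 147.182.197.56 (from the DHCP lease above).
ip=147.182.197.56
prefix=20

# Split the dotted quad into octets (POSIX-safe).
oldIFS=$IFS; IFS=.
set -- $ip
IFS=$oldIFS

addr=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
net=$(( addr & mask ))
net_str="$(( (net >> 24) & 255 )).$(( (net >> 16) & 255 )).$(( (net >> 8) & 255 )).$(( net & 255 ))"
echo "network: $net_str/$prefix"
```

A /20 masks the third octet with 240, so 197 & 240 = 192 and the network is 147.182.192.0/20, which contains the gateway 147.182.192.1 reported in the log.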
Oct 30 00:03:53.131709 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Oct 30 00:03:53.131719 ignition[703]: failed to fetch config: resource requires networking
Oct 30 00:03:53.133052 ignition[703]: Ignition finished successfully
Oct 30 00:03:53.202584 ignition[808]: Ignition 2.22.0
Oct 30 00:03:53.202606 ignition[808]: Stage: fetch
Oct 30 00:03:53.202840 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Oct 30 00:03:53.202856 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 30 00:03:53.202989 ignition[808]: parsed url from cmdline: ""
Oct 30 00:03:53.202995 ignition[808]: no config URL provided
Oct 30 00:03:53.203006 ignition[808]: reading system config file "/usr/lib/ignition/user.ign"
Oct 30 00:03:53.203019 ignition[808]: no config at "/usr/lib/ignition/user.ign"
Oct 30 00:03:53.203063 ignition[808]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Oct 30 00:03:53.223258 ignition[808]: GET result: OK
Oct 30 00:03:53.224718 ignition[808]: parsing config with SHA512: 12a8887398025b879ad9751d7a8e20e203653f180a0f5f4a4667551ca33739be2d0d8e1060c13382526eadc1609840502f72263b13482f3f00b737def78b4d8c
Oct 30 00:03:53.233842 unknown[808]: fetched base config from "system"
Oct 30 00:03:53.233855 unknown[808]: fetched base config from "system"
Oct 30 00:03:53.234695 ignition[808]: fetch: fetch complete
Oct 30 00:03:53.233863 unknown[808]: fetched user config from "digitalocean"
Oct 30 00:03:53.234722 ignition[808]: fetch: fetch passed
Oct 30 00:03:53.239123 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 30 00:03:53.234799 ignition[808]: Ignition finished successfully
Oct 30 00:03:53.242752 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
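After fetching user-data from the metadata endpoint, Ignition logs the SHA512 digest of the config it is about to parse ("parsing config with SHA512: ..."). An equivalent digest can be computed locally with sha512sum; the JSON below is only a placeholder, since the droplet's actual user-data is not shown in the log:

```shell
# Compute a SHA512 digest the way one would verify fetched user-data.
# NOTE: placeholder config; the real user-data content is not in the log.
config='{"ignition":{"version":"3.4.0"}}'
digest=$(printf '%s' "$config" | sha512sum | cut -d' ' -f1)
echo "SHA512: $digest"
```

A SHA512 digest is always 128 hex characters, matching the length of the digest Ignition logs above.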
Oct 30 00:03:53.286381 ignition[814]: Ignition 2.22.0
Oct 30 00:03:53.286397 ignition[814]: Stage: kargs
Oct 30 00:03:53.286544 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Oct 30 00:03:53.286554 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 30 00:03:53.289854 ignition[814]: kargs: kargs passed
Oct 30 00:03:53.289945 ignition[814]: Ignition finished successfully
Oct 30 00:03:53.293212 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 30 00:03:53.296831 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 30 00:03:53.339482 ignition[821]: Ignition 2.22.0
Oct 30 00:03:53.339511 ignition[821]: Stage: disks
Oct 30 00:03:53.339672 ignition[821]: no configs at "/usr/lib/ignition/base.d"
Oct 30 00:03:53.339681 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 30 00:03:53.344078 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 30 00:03:53.340850 ignition[821]: disks: disks passed
Oct 30 00:03:53.346352 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 30 00:03:53.340935 ignition[821]: Ignition finished successfully
Oct 30 00:03:53.347237 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 30 00:03:53.348509 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 00:03:53.350163 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 30 00:03:53.351587 systemd[1]: Reached target basic.target - Basic System.
Oct 30 00:03:53.356337 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 30 00:03:53.383213 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Oct 30 00:03:53.387838 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 30 00:03:53.390628 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 30 00:03:53.548170 kernel: EXT4-fs (vda9): mounted filesystem 02607114-2ead-44bc-a76e-2d51f82b108e r/w with ordered data mode. Quota mode: none.
Oct 30 00:03:53.549987 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 30 00:03:53.552436 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 30 00:03:53.556016 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 30 00:03:53.560240 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 30 00:03:53.564328 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Oct 30 00:03:53.573305 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 30 00:03:53.574271 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 30 00:03:53.574409 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 00:03:53.580351 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 30 00:03:53.591267 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 30 00:03:53.608737 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (837)
Oct 30 00:03:53.608771 kernel: BTRFS info (device vda6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e
Oct 30 00:03:53.608784 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 30 00:03:53.608796 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 00:03:53.608817 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 00:03:53.610672 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 30 00:03:53.660143 coreos-metadata[840]: Oct 30 00:03:53.658 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 30 00:03:53.672447 coreos-metadata[839]: Oct 30 00:03:53.672 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 30 00:03:53.681666 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
Oct 30 00:03:53.688127 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory
Oct 30 00:03:53.692988 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory
Oct 30 00:03:53.698823 coreos-metadata[840]: Oct 30 00:03:53.698 INFO Fetch successful
Oct 30 00:03:53.699754 coreos-metadata[839]: Oct 30 00:03:53.698 INFO Fetch successful
Oct 30 00:03:53.704543 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 30 00:03:53.706116 coreos-metadata[840]: Oct 30 00:03:53.705 INFO wrote hostname ci-4459.1.0-n-959986c1c8 to /sysroot/etc/hostname
Oct 30 00:03:53.707327 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 30 00:03:53.712323 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Oct 30 00:03:53.712437 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Oct 30 00:03:53.814401 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 30 00:03:53.817728 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 30 00:03:53.819315 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 30 00:03:53.839201 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 30 00:03:53.841711 kernel: BTRFS info (device vda6): last unmount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e
Oct 30 00:03:53.858094 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 30 00:03:53.884510 ignition[958]: INFO : Ignition 2.22.0
Oct 30 00:03:53.885810 ignition[958]: INFO : Stage: mount
Oct 30 00:03:53.885810 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 00:03:53.885810 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 30 00:03:53.888730 ignition[958]: INFO : mount: mount passed
Oct 30 00:03:53.889551 ignition[958]: INFO : Ignition finished successfully
Oct 30 00:03:53.891400 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 30 00:03:53.893490 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 30 00:03:53.926529 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 30 00:03:53.953402 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (969)
Oct 30 00:03:53.953475 kernel: BTRFS info (device vda6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e
Oct 30 00:03:53.955471 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 30 00:03:53.961419 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 00:03:53.961582 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 00:03:53.965168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 30 00:03:54.005413 ignition[986]: INFO : Ignition 2.22.0
Oct 30 00:03:54.007740 ignition[986]: INFO : Stage: files
Oct 30 00:03:54.007740 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 00:03:54.007740 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 30 00:03:54.007740 ignition[986]: DEBUG : files: compiled without relabeling support, skipping
Oct 30 00:03:54.011266 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 30 00:03:54.012237 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 30 00:03:54.016069 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 30 00:03:54.017452 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 30 00:03:54.019131 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 30 00:03:54.018765 unknown[986]: wrote ssh authorized keys file for user: core
Oct 30 00:03:54.023563 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 30 00:03:54.023563 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Oct 30 00:03:54.130874 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 30 00:03:54.224252 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 30 00:03:54.224252 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 30 00:03:54.227922 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 30 00:03:54.227922 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 00:03:54.227922 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 00:03:54.227922 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 00:03:54.227922 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 00:03:54.227922 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 00:03:54.227922 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 00:03:54.242885 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 00:03:54.242885 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 00:03:54.242885 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 30 00:03:54.242885 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 30 00:03:54.242885 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 30 00:03:54.242885 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Oct 30 00:03:54.497275 systemd-networkd[798]: eth0: Gained IPv6LL
Oct 30 00:03:54.638203 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 30 00:03:54.817301 systemd-networkd[798]: eth1: Gained IPv6LL
Oct 30 00:03:55.256357 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 30 00:03:55.256357 ignition[986]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 30 00:03:55.259568 ignition[986]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 00:03:55.262335 ignition[986]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 00:03:55.262335 ignition[986]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 30 00:03:55.262335 ignition[986]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Oct 30 00:03:55.262335 ignition[986]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Oct 30 00:03:55.262335 ignition[986]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 00:03:55.262335 ignition[986]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 00:03:55.262335 ignition[986]: INFO : files: files passed
Oct 30 00:03:55.262335 ignition[986]: INFO : Ignition finished successfully
Oct 30 00:03:55.263646 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 30 00:03:55.267260 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 30 00:03:55.270256 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 30 00:03:55.284859 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 30 00:03:55.285270 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 30 00:03:55.297397 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:03:55.297397 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:03:55.300562 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:03:55.302259 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 00:03:55.303653 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 30 00:03:55.305762 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 30 00:03:55.362348 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 30 00:03:55.362474 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 30 00:03:55.364418 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 30 00:03:55.365743 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 30 00:03:55.367397 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 30 00:03:55.368408 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 30 00:03:55.396242 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 00:03:55.400345 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 30 00:03:55.429055 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 30 00:03:55.431484 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:03:55.432730 systemd[1]: Stopped target timers.target - Timer Units.
Oct 30 00:03:55.434701 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 30 00:03:55.434930 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 00:03:55.437047 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 30 00:03:55.438268 systemd[1]: Stopped target basic.target - Basic System.
Oct 30 00:03:55.440209 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 30 00:03:55.442092 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 00:03:55.443974 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 30 00:03:55.446290 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 00:03:55.448311 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 30 00:03:55.450245 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 00:03:55.452211 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 30 00:03:55.453800 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 30 00:03:55.455616 systemd[1]: Stopped target swap.target - Swaps.
Oct 30 00:03:55.457186 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 30 00:03:55.457399 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 00:03:55.459562 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:03:55.460706 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:03:55.462503 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 30 00:03:55.462983 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:03:55.464510 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 30 00:03:55.464759 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 30 00:03:55.467256 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 30 00:03:55.467509 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 00:03:55.469843 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 30 00:03:55.470086 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 30 00:03:55.471821 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 30 00:03:55.472068 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 30 00:03:55.476405 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 30 00:03:55.477578 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 30 00:03:55.479210 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:03:55.483663 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 30 00:03:55.487093 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 30 00:03:55.487519 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:03:55.489132 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 30 00:03:55.489368 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 00:03:55.503456 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 30 00:03:55.503585 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 30 00:03:55.526623 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 30 00:03:55.531470 ignition[1039]: INFO : Ignition 2.22.0
Oct 30 00:03:55.533264 ignition[1039]: INFO : Stage: umount
Oct 30 00:03:55.533264 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 00:03:55.533264 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 30 00:03:55.536123 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 30 00:03:55.536274 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 30 00:03:55.539725 ignition[1039]: INFO : umount: umount passed
Oct 30 00:03:55.539725 ignition[1039]: INFO : Ignition finished successfully
Oct 30 00:03:55.539578 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 30 00:03:55.540467 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 30 00:03:55.541866 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 30 00:03:55.541967 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 30 00:03:55.543127 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 30 00:03:55.543181 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 30 00:03:55.544762 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 30 00:03:55.544806 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 30 00:03:55.546350 systemd[1]: Stopped target network.target - Network.
Oct 30 00:03:55.547792 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 30 00:03:55.547852 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 00:03:55.549401 systemd[1]: Stopped target paths.target - Path Units.
Oct 30 00:03:55.550900 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 30 00:03:55.554280 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:03:55.555450 systemd[1]: Stopped target slices.target - Slice Units.
Oct 30 00:03:55.557624 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 30 00:03:55.559509 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 30 00:03:55.559592 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 00:03:55.562166 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 30 00:03:55.562228 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 00:03:55.563973 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 30 00:03:55.564056 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 30 00:03:55.566034 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 30 00:03:55.566127 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 30 00:03:55.567585 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 30 00:03:55.567674 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 30 00:03:55.569451 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 30 00:03:55.570968 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 30 00:03:55.575159 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 30 00:03:55.575271 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 30 00:03:55.581531 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 30 00:03:55.581912 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 30 00:03:55.582065 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 30 00:03:55.584931 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 30 00:03:55.586934 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 30 00:03:55.588545 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 30 00:03:55.588595 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:03:55.591156 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 30 00:03:55.593143 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 30 00:03:55.593244 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 00:03:55.597261 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 30 00:03:55.597328 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:03:55.598860 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 30 00:03:55.598924 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:03:55.600035 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 30 00:03:55.600143 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 00:03:55.603709 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 00:03:55.608017 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 30 00:03:55.608705 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 30 00:03:55.620803 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 30 00:03:55.620961 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 00:03:55.622934 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 30 00:03:55.623045 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 30 00:03:55.625365 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 30 00:03:55.625463 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:03:55.627549 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 30 00:03:55.627590 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:03:55.629066 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 30 00:03:55.629193 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 00:03:55.631491 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 30 00:03:55.631549 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 30 00:03:55.633390 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 30 00:03:55.633474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 00:03:55.636472 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 30 00:03:55.638528 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 30 00:03:55.638597 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:03:55.641210 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 30 00:03:55.641288 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:03:55.643996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:03:55.644067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:03:55.647606 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Oct 30 00:03:55.647666 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 30 00:03:55.647708 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 30 00:03:55.658047 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 30 00:03:55.658214 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 30 00:03:55.659294 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 30 00:03:55.663122 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 30 00:03:55.693320 systemd[1]: Switching root.
Oct 30 00:03:55.731501 systemd-journald[193]: Journal stopped
Oct 30 00:03:57.057139 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 30 00:03:57.057211 kernel: SELinux: policy capability network_peer_controls=1
Oct 30 00:03:57.057228 kernel: SELinux: policy capability open_perms=1
Oct 30 00:03:57.057239 kernel: SELinux: policy capability extended_socket_class=1
Oct 30 00:03:57.057251 kernel: SELinux: policy capability always_check_network=0
Oct 30 00:03:57.057263 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 30 00:03:57.057274 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 30 00:03:57.057286 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 30 00:03:57.057304 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 30 00:03:57.057319 kernel: SELinux: policy capability userspace_initial_context=0
Oct 30 00:03:57.057336 kernel: audit: type=1403 audit(1761782636.001:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 30 00:03:57.057353 systemd[1]: Successfully loaded SELinux policy in 70.216ms.
Oct 30 00:03:57.057377 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.396ms.
Oct 30 00:03:57.057391 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 00:03:57.057404 systemd[1]: Detected virtualization kvm.
Oct 30 00:03:57.057416 systemd[1]: Detected architecture x86-64.
Oct 30 00:03:57.057427 systemd[1]: Detected first boot.
Oct 30 00:03:57.057443 systemd[1]: Hostname set to .
Oct 30 00:03:57.057455 systemd[1]: Initializing machine ID from VM UUID.
Oct 30 00:03:57.057466 zram_generator::config[1083]: No configuration found.
Oct 30 00:03:57.057479 kernel: Guest personality initialized and is inactive
Oct 30 00:03:57.057496 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 30 00:03:57.057524 kernel: Initialized host personality
Oct 30 00:03:57.057536 kernel: NET: Registered PF_VSOCK protocol family
Oct 30 00:03:57.057549 systemd[1]: Populated /etc with preset unit settings.
Oct 30 00:03:57.057565 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 30 00:03:57.057578 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 30 00:03:57.057590 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 30 00:03:57.057603 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 30 00:03:57.057615 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 30 00:03:57.057627 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 30 00:03:57.057638 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 30 00:03:57.057650 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 30 00:03:57.057662 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 30 00:03:57.057677 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 30 00:03:57.057689 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 30 00:03:57.057706 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 30 00:03:57.057718 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:03:57.057730 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:03:57.057742 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 30 00:03:57.057754 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 30 00:03:57.057770 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 30 00:03:57.057783 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 00:03:57.057796 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 30 00:03:57.057813 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:03:57.057826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:03:57.057838 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 30 00:03:57.057850 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 30 00:03:57.057863 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 30 00:03:57.057878 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 30 00:03:57.057891 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:03:57.057904 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 00:03:57.057917 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 00:03:57.057929 systemd[1]: Reached target swap.target - Swaps.
Oct 30 00:03:57.057941 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 30 00:03:57.057955 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 30 00:03:57.057967 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 30 00:03:57.057979 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:03:57.057995 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:03:57.058007 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:03:57.058020 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 30 00:03:57.058032 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 30 00:03:57.058045 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 30 00:03:57.058058 systemd[1]: Mounting media.mount - External Media Directory...
Oct 30 00:03:57.058069 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:03:57.058082 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 30 00:03:57.065261 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 30 00:03:57.065339 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 30 00:03:57.065353 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 30 00:03:57.065365 systemd[1]: Reached target machines.target - Containers.
Oct 30 00:03:57.065378 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 30 00:03:57.065391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:03:57.065403 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 00:03:57.065415 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 30 00:03:57.065427 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 00:03:57.065442 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 00:03:57.065454 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 00:03:57.065467 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 30 00:03:57.065478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 00:03:57.065491 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 30 00:03:57.065503 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 30 00:03:57.065535 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 30 00:03:57.065553 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 30 00:03:57.065570 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 30 00:03:57.065587 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:03:57.065600 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 00:03:57.065615 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 00:03:57.065627 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 00:03:57.065640 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 30 00:03:57.065655 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 30 00:03:57.065667 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 00:03:57.065680 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 30 00:03:57.065693 systemd[1]: Stopped verity-setup.service.
Oct 30 00:03:57.065705 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:03:57.065721 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 30 00:03:57.065732 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 30 00:03:57.065744 systemd[1]: Mounted media.mount - External Media Directory.
Oct 30 00:03:57.065756 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 30 00:03:57.065768 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 30 00:03:57.065780 kernel: fuse: init (API version 7.41)
Oct 30 00:03:57.065794 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 30 00:03:57.065805 kernel: loop: module loaded
Oct 30 00:03:57.065817 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:03:57.065831 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 30 00:03:57.065844 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 30 00:03:57.065856 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 00:03:57.065867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 00:03:57.065878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 00:03:57.065889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 00:03:57.065901 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 30 00:03:57.065913 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 30 00:03:57.065927 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 00:03:57.065938 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 00:03:57.065949 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:03:57.065961 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:03:57.065973 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 30 00:03:57.066039 systemd-journald[1163]: Collecting audit messages is disabled.
Oct 30 00:03:57.066067 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 30 00:03:57.066083 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 30 00:03:57.066112 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 00:03:57.066128 systemd-journald[1163]: Journal started
Oct 30 00:03:57.066156 systemd-journald[1163]: Runtime Journal (/run/log/journal/cedcf26f30b346b49c026f6a82ddd835) is 4.9M, max 39.2M, 34.3M free.
Oct 30 00:03:56.616196 systemd[1]: Queued start job for default target multi-user.target.
Oct 30 00:03:56.642292 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 30 00:03:56.642804 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 30 00:03:57.071129 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 30 00:03:57.084126 kernel: ACPI: bus type drm_connector registered
Oct 30 00:03:57.094129 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 30 00:03:57.100960 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 30 00:03:57.101072 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 00:03:57.106171 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 30 00:03:57.113911 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 30 00:03:57.118143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:03:57.124245 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 30 00:03:57.130129 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 00:03:57.134141 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 30 00:03:57.140132 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 00:03:57.146559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 00:03:57.152136 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 30 00:03:57.177720 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 30 00:03:57.184708 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 00:03:57.190931 kernel: loop0: detected capacity change from 0 to 110984
Oct 30 00:03:57.193693 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 00:03:57.194369 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 00:03:57.199638 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:03:57.209751 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 30 00:03:57.201924 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 30 00:03:57.210272 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 30 00:03:57.224391 kernel: loop1: detected capacity change from 0 to 128016
Oct 30 00:03:57.238705 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:03:57.243802 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 30 00:03:57.265130 kernel: loop2: detected capacity change from 0 to 224512
Oct 30 00:03:57.274012 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 30 00:03:57.279388 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 30 00:03:57.283933 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 30 00:03:57.286169 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 30 00:03:57.292587 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 00:03:57.322139 kernel: loop3: detected capacity change from 0 to 8
Oct 30 00:03:57.325474 systemd-journald[1163]: Time spent on flushing to /var/log/journal/cedcf26f30b346b49c026f6a82ddd835 is 27.732ms for 1023 entries.
Oct 30 00:03:57.325474 systemd-journald[1163]: System Journal (/var/log/journal/cedcf26f30b346b49c026f6a82ddd835) is 8M, max 195.6M, 187.6M free.
Oct 30 00:03:57.371288 systemd-journald[1163]: Received client request to flush runtime journal.
Oct 30 00:03:57.371425 kernel: loop4: detected capacity change from 0 to 110984
Oct 30 00:03:57.343183 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 30 00:03:57.376530 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 30 00:03:57.385645 kernel: loop5: detected capacity change from 0 to 128016
Oct 30 00:03:57.387603 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Oct 30 00:03:57.388065 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Oct 30 00:03:57.398174 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:03:57.399151 kernel: loop6: detected capacity change from 0 to 224512
Oct 30 00:03:57.416124 kernel: loop7: detected capacity change from 0 to 8
Oct 30 00:03:57.419264 (sd-merge)[1230]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Oct 30 00:03:57.422296 (sd-merge)[1230]: Merged extensions into '/usr'.
Oct 30 00:03:57.451493 systemd[1]: Reload requested from client PID 1190 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 30 00:03:57.451526 systemd[1]: Reloading...
Oct 30 00:03:57.672135 zram_generator::config[1257]: No configuration found.
Oct 30 00:03:57.894931 ldconfig[1186]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 30 00:03:58.042245 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 30 00:03:58.042611 systemd[1]: Reloading finished in 590 ms.
Oct 30 00:03:58.066326 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 30 00:03:58.068215 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 30 00:03:58.088810 systemd[1]: Starting ensure-sysext.service...
Oct 30 00:03:58.092535 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 00:03:58.121294 systemd[1]: Reload requested from client PID 1302 ('systemctl') (unit ensure-sysext.service)...
Oct 30 00:03:58.121317 systemd[1]: Reloading...
Oct 30 00:03:58.130763 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 30 00:03:58.131189 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 30 00:03:58.131466 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 30 00:03:58.131702 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 30 00:03:58.133275 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 30 00:03:58.133818 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Oct 30 00:03:58.133878 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Oct 30 00:03:58.138879 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 00:03:58.138893 systemd-tmpfiles[1303]: Skipping /boot Oct 30 00:03:58.150555 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 00:03:58.150862 systemd-tmpfiles[1303]: Skipping /boot Oct 30 00:03:58.196208 zram_generator::config[1327]: No configuration found. Oct 30 00:03:58.428635 systemd[1]: Reloading finished in 306 ms. Oct 30 00:03:58.451204 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 30 00:03:58.459838 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 00:03:58.466988 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 00:03:58.479580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 30 00:03:58.491436 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 30 00:03:58.501314 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 00:03:58.505603 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 00:03:58.513539 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 30 00:03:58.527767 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 30 00:03:58.528014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:03:58.537468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 00:03:58.555660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 00:03:58.562767 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 00:03:58.564372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 00:03:58.564558 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:03:58.564668 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:03:58.575618 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:03:58.575878 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:03:58.576138 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 00:03:58.576279 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:03:58.581052 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Oct 30 00:03:58.581959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:03:58.585889 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 30 00:03:58.597661 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:03:58.597913 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:03:58.603700 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 00:03:58.605348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 00:03:58.605544 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:03:58.605725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:03:58.609179 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 30 00:03:58.623084 systemd[1]: Finished ensure-sysext.service. Oct 30 00:03:58.625736 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 00:03:58.629401 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 00:03:58.631987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 00:03:58.633277 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 00:03:58.634918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 30 00:03:58.635246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 00:03:58.640301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 00:03:58.640483 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 00:03:58.652389 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 30 00:03:58.653197 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 30 00:03:58.659388 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 30 00:03:58.662756 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 30 00:03:58.663319 systemd-udevd[1380]: Using default interface naming scheme 'v255'. Oct 30 00:03:58.669233 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 00:03:58.669995 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 00:03:58.693252 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 30 00:03:58.704616 augenrules[1416]: No rules Oct 30 00:03:58.706571 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 00:03:58.706891 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 00:03:58.719471 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 00:03:58.725410 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 00:03:58.767892 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 30 00:03:58.853729 systemd-resolved[1378]: Positive Trust Anchors: Oct 30 00:03:58.853749 systemd-resolved[1378]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 00:03:58.853785 systemd-resolved[1378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 00:03:58.860741 systemd-resolved[1378]: Using system hostname 'ci-4459.1.0-n-959986c1c8'. Oct 30 00:03:58.862881 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 00:03:58.864651 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 00:03:58.986145 systemd-networkd[1424]: lo: Link UP Oct 30 00:03:58.986155 systemd-networkd[1424]: lo: Gained carrier Oct 30 00:03:58.988669 systemd-networkd[1424]: Enumeration completed Oct 30 00:03:58.988840 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 00:03:58.991288 systemd[1]: Reached target network.target - Network. Oct 30 00:03:58.993491 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 30 00:03:59.001319 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 30 00:03:59.035661 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 30 00:03:59.037340 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 00:03:59.039431 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Oct 30 00:03:59.040820 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 30 00:03:59.043223 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 30 00:03:59.044201 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 30 00:03:59.046269 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 30 00:03:59.046332 systemd[1]: Reached target paths.target - Path Units. Oct 30 00:03:59.048233 systemd[1]: Reached target time-set.target - System Time Set. Oct 30 00:03:59.049394 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 30 00:03:59.051524 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 30 00:03:59.053192 systemd[1]: Reached target timers.target - Timer Units. Oct 30 00:03:59.056341 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 30 00:03:59.062311 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 30 00:03:59.072647 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 30 00:03:59.074018 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 30 00:03:59.075555 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 30 00:03:59.085292 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 30 00:03:59.087047 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 30 00:03:59.093196 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 30 00:03:59.094538 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Oct 30 00:03:59.127608 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 30 00:03:59.127797 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 00:03:59.130134 systemd[1]: Reached target basic.target - Basic System. Oct 30 00:03:59.132289 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 30 00:03:59.132326 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 30 00:03:59.133839 systemd[1]: Starting containerd.service - containerd container runtime... Oct 30 00:03:59.138463 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 30 00:03:59.142417 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 30 00:03:59.147569 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 30 00:03:59.154275 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 30 00:03:59.162942 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 30 00:03:59.164067 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 30 00:03:59.174313 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 30 00:03:59.178987 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 30 00:03:59.183619 jq[1469]: false Oct 30 00:03:59.185316 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 30 00:03:59.189369 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 30 00:03:59.195025 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 30 00:03:59.209217 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 30 00:03:59.212593 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 30 00:03:59.220650 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 30 00:03:59.222463 systemd[1]: Starting update-engine.service - Update Engine... Oct 30 00:03:59.226387 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 30 00:03:59.243405 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 30 00:03:59.245345 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 30 00:03:59.245761 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 30 00:03:59.248171 google_oslogin_nss_cache[1471]: oslogin_cache_refresh[1471]: Refreshing passwd entry cache Oct 30 00:03:59.247347 oslogin_cache_refresh[1471]: Refreshing passwd entry cache Oct 30 00:03:59.252659 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 30 00:03:59.254500 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 30 00:03:59.260147 google_oslogin_nss_cache[1471]: oslogin_cache_refresh[1471]: Failure getting users, quitting Oct 30 00:03:59.260147 google_oslogin_nss_cache[1471]: oslogin_cache_refresh[1471]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 30 00:03:59.260147 google_oslogin_nss_cache[1471]: oslogin_cache_refresh[1471]: Refreshing group entry cache Oct 30 00:03:59.257719 oslogin_cache_refresh[1471]: Failure getting users, quitting Oct 30 00:03:59.257742 oslogin_cache_refresh[1471]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Oct 30 00:03:59.257801 oslogin_cache_refresh[1471]: Refreshing group entry cache Oct 30 00:03:59.262960 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Oct 30 00:03:59.266136 google_oslogin_nss_cache[1471]: oslogin_cache_refresh[1471]: Failure getting groups, quitting Oct 30 00:03:59.266136 google_oslogin_nss_cache[1471]: oslogin_cache_refresh[1471]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 00:03:59.265251 oslogin_cache_refresh[1471]: Failure getting groups, quitting Oct 30 00:03:59.265271 oslogin_cache_refresh[1471]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 00:03:59.272998 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 30 00:03:59.276207 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 30 00:03:59.289756 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 30 00:03:59.301465 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Oct 30 00:03:59.303997 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 30 00:03:59.310165 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 30 00:03:59.315248 coreos-metadata[1465]: Oct 30 00:03:59.313 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 30 00:03:59.317267 systemd[1]: motdgen.service: Deactivated successfully. Oct 30 00:03:59.317921 coreos-metadata[1465]: Oct 30 00:03:59.317 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) Oct 30 00:03:59.319464 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Oct 30 00:03:59.341761 jq[1481]: true Oct 30 00:03:59.348076 extend-filesystems[1470]: Found /dev/vda6 Oct 30 00:03:59.375705 update_engine[1480]: I20251030 00:03:59.374550 1480 main.cc:92] Flatcar Update Engine starting Oct 30 00:03:59.376660 dbus-daemon[1466]: [system] SELinux support is enabled Oct 30 00:03:59.379806 extend-filesystems[1470]: Found /dev/vda9 Oct 30 00:03:59.377197 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 30 00:03:59.401369 jq[1508]: true Oct 30 00:03:59.382153 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 30 00:03:59.382185 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 30 00:03:59.400503 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 30 00:03:59.401649 (ntainerd)[1510]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 30 00:03:59.419125 kernel: ISO 9660 Extensions: RRIP_1991A Oct 30 00:03:59.419219 extend-filesystems[1470]: Checking size of /dev/vda9 Oct 30 00:03:59.419219 extend-filesystems[1470]: Resized partition /dev/vda9 Oct 30 00:03:59.420648 extend-filesystems[1522]: resize2fs 1.47.3 (8-Jul-2025) Oct 30 00:03:59.447221 kernel: mousedev: PS/2 mouse device common for all mice Oct 30 00:03:59.447255 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Oct 30 00:03:59.447347 update_engine[1480]: I20251030 00:03:59.425972 1480 update_check_scheduler.cc:74] Next update check in 5m5s Oct 30 00:03:59.433986 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. 
Oct 30 00:03:59.456853 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Oct 30 00:03:59.458740 tar[1484]: linux-amd64/LICENSE Oct 30 00:03:59.458740 tar[1484]: linux-amd64/helm Oct 30 00:03:59.456891 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 30 00:03:59.461850 systemd-networkd[1424]: eth0: Configuring with /run/systemd/network/10-3e:0e:20:84:84:a0.network. Oct 30 00:03:59.462912 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 30 00:03:59.465684 systemd[1]: Started update-engine.service - Update Engine. Oct 30 00:03:59.471211 systemd-networkd[1424]: eth0: Link UP Oct 30 00:03:59.472285 systemd-networkd[1424]: eth0: Gained carrier Oct 30 00:03:59.473684 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 30 00:03:59.485418 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Oct 30 00:03:59.549927 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Oct 30 00:03:59.551800 systemd-networkd[1424]: eth1: Configuring with /run/systemd/network/10-ea:d3:d5:11:c6:2c.network. Oct 30 00:03:59.559806 systemd-networkd[1424]: eth1: Link UP Oct 30 00:03:59.562416 systemd-networkd[1424]: eth1: Gained carrier Oct 30 00:03:59.569992 systemd-logind[1479]: New seat seat0. Oct 30 00:03:59.570343 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 30 00:03:59.570343 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 8 Oct 30 00:03:59.570343 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Oct 30 00:03:59.610224 extend-filesystems[1470]: Resized filesystem in /dev/vda9 Oct 30 00:03:59.571363 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Oct 30 00:03:59.572372 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 30 00:03:59.600203 systemd[1]: Started systemd-logind.service - User Login Management. Oct 30 00:03:59.666246 bash[1548]: Updated "/home/core/.ssh/authorized_keys" Oct 30 00:03:59.666479 systemd-timesyncd[1407]: Contacted time server 104.234.61.117:123 (0.flatcar.pool.ntp.org). Oct 30 00:03:59.666555 systemd-timesyncd[1407]: Initial clock synchronization to Thu 2025-10-30 00:03:59.841026 UTC. Oct 30 00:03:59.668039 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 30 00:03:59.680142 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 30 00:03:59.676477 systemd[1]: Starting sshkeys.service... Oct 30 00:03:59.708888 kernel: ACPI: button: Power Button [PWRF] Oct 30 00:03:59.725051 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 30 00:03:59.725389 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 30 00:03:59.743341 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 30 00:03:59.748641 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 30 00:03:59.816690 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 30 00:03:59.896386 coreos-metadata[1559]: Oct 30 00:03:59.896 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 30 00:03:59.915568 locksmithd[1529]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 30 00:03:59.917239 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 30 00:03:59.925469 coreos-metadata[1559]: Oct 30 00:03:59.923 INFO Fetch successful Oct 30 00:03:59.924737 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Oct 30 00:03:59.952275 unknown[1559]: wrote ssh authorized keys file for user: core Oct 30 00:03:59.965708 containerd[1510]: time="2025-10-30T00:03:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 30 00:03:59.971757 containerd[1510]: time="2025-10-30T00:03:59.971679057Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 30 00:04:00.008243 systemd[1]: issuegen.service: Deactivated successfully. Oct 30 00:04:00.009859 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 30 00:04:00.015255 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 30 00:04:00.028880 update-ssh-keys[1583]: Updated "/home/core/.ssh/authorized_keys" Oct 30 00:04:00.031353 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 30 00:04:00.036025 systemd[1]: Finished sshkeys.service. 
Oct 30 00:04:00.057788 containerd[1510]: time="2025-10-30T00:04:00.053953361Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.436µs" Oct 30 00:04:00.057788 containerd[1510]: time="2025-10-30T00:04:00.056267157Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 30 00:04:00.057788 containerd[1510]: time="2025-10-30T00:04:00.056327434Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 30 00:04:00.057788 containerd[1510]: time="2025-10-30T00:04:00.056576683Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 30 00:04:00.057788 containerd[1510]: time="2025-10-30T00:04:00.056599018Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 30 00:04:00.057788 containerd[1510]: time="2025-10-30T00:04:00.056659170Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 00:04:00.057788 containerd[1510]: time="2025-10-30T00:04:00.056747625Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 00:04:00.057788 containerd[1510]: time="2025-10-30T00:04:00.056764395Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 00:04:00.054112 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 30 00:04:00.057792 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Oct 30 00:04:00.061511 containerd[1510]: time="2025-10-30T00:04:00.060168484Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 00:04:00.061511 containerd[1510]: time="2025-10-30T00:04:00.060265791Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 00:04:00.061511 containerd[1510]: time="2025-10-30T00:04:00.060293618Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 00:04:00.061511 containerd[1510]: time="2025-10-30T00:04:00.060303948Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 30 00:04:00.061511 containerd[1510]: time="2025-10-30T00:04:00.060509095Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 30 00:04:00.061511 containerd[1510]: time="2025-10-30T00:04:00.060844822Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 00:04:00.061511 containerd[1510]: time="2025-10-30T00:04:00.060895030Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 00:04:00.061511 containerd[1510]: time="2025-10-30T00:04:00.060907361Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 30 00:04:00.061511 containerd[1510]: time="2025-10-30T00:04:00.060949795Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 30 00:04:00.062744 
systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 30 00:04:00.063873 systemd[1]: Reached target getty.target - Login Prompts. Oct 30 00:04:00.065589 containerd[1510]: time="2025-10-30T00:04:00.065248491Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 30 00:04:00.065589 containerd[1510]: time="2025-10-30T00:04:00.065428909Z" level=info msg="metadata content store policy set" policy=shared Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.074913818Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075031757Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075051174Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075161696Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075182926Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075201051Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075219445Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075237596Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075254275Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075269014Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075285943Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075325100Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075512265Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 30 00:04:00.076223 containerd[1510]: time="2025-10-30T00:04:00.075539173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 30 00:04:00.076587 containerd[1510]: time="2025-10-30T00:04:00.075555143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 30 00:04:00.076587 containerd[1510]: time="2025-10-30T00:04:00.075567822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 30 00:04:00.076587 containerd[1510]: time="2025-10-30T00:04:00.075580241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 30 00:04:00.076587 containerd[1510]: time="2025-10-30T00:04:00.075592446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 30 00:04:00.076587 containerd[1510]: time="2025-10-30T00:04:00.075604219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 30 00:04:00.076587 containerd[1510]: time="2025-10-30T00:04:00.075631872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 30 00:04:00.076587 containerd[1510]: time="2025-10-30T00:04:00.075673706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 30 00:04:00.076587 containerd[1510]: time="2025-10-30T00:04:00.075689071Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 30 00:04:00.076587 containerd[1510]: time="2025-10-30T00:04:00.075701405Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 30 00:04:00.076587 containerd[1510]: time="2025-10-30T00:04:00.075775905Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 30 00:04:00.076587 containerd[1510]: time="2025-10-30T00:04:00.075790794Z" level=info msg="Start snapshots syncer"
Oct 30 00:04:00.078971 containerd[1510]: time="2025-10-30T00:04:00.077244242Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 30 00:04:00.080523 containerd[1510]: time="2025-10-30T00:04:00.079693554Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 30 00:04:00.080523 containerd[1510]: time="2025-10-30T00:04:00.079789338Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089284765Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089542878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089601417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089628607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089647646Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089662837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089685202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089701640Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089747654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089764903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089784382Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089857202Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089887207Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 30 00:04:00.090819 containerd[1510]: time="2025-10-30T00:04:00.089903375Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 30 00:04:00.091380 containerd[1510]: time="2025-10-30T00:04:00.089924251Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 30 00:04:00.091380 containerd[1510]: time="2025-10-30T00:04:00.089940815Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 30 00:04:00.091380 containerd[1510]: time="2025-10-30T00:04:00.089954735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 30 00:04:00.091380 containerd[1510]: time="2025-10-30T00:04:00.089968083Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 30 00:04:00.091380 containerd[1510]: time="2025-10-30T00:04:00.089991055Z" level=info msg="runtime interface created"
Oct 30 00:04:00.091380 containerd[1510]: time="2025-10-30T00:04:00.089998326Z" level=info msg="created NRI interface"
Oct 30 00:04:00.091380 containerd[1510]: time="2025-10-30T00:04:00.090012466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 30 00:04:00.091380 containerd[1510]: time="2025-10-30T00:04:00.090035190Z" level=info msg="Connect containerd service"
Oct 30 00:04:00.091380 containerd[1510]: time="2025-10-30T00:04:00.090103920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 30 00:04:00.104348 containerd[1510]: time="2025-10-30T00:04:00.103955462Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 30 00:04:00.206201 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 30 00:04:00.251587 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 30 00:04:00.319012 coreos-metadata[1465]: Oct 30 00:04:00.318 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
Oct 30 00:04:00.325166 kernel: Console: switching to colour dummy device 80x25
Oct 30 00:04:00.338040 coreos-metadata[1465]: Oct 30 00:04:00.332 INFO Fetch successful
Oct 30 00:04:00.372159 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 30 00:04:00.372262 kernel: [drm] features: -context_init
Oct 30 00:04:00.411795 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 30 00:04:00.419267 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 30 00:04:00.420088 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 30 00:04:00.433871 containerd[1510]: time="2025-10-30T00:04:00.433604023Z" level=info msg="Start subscribing containerd event"
Oct 30 00:04:00.433871 containerd[1510]: time="2025-10-30T00:04:00.433670082Z" level=info msg="Start recovering state"
Oct 30 00:04:00.433871 containerd[1510]: time="2025-10-30T00:04:00.433775846Z" level=info msg="Start event monitor"
Oct 30 00:04:00.433871 containerd[1510]: time="2025-10-30T00:04:00.433789254Z" level=info msg="Start cni network conf syncer for default"
Oct 30 00:04:00.433871 containerd[1510]: time="2025-10-30T00:04:00.433799072Z" level=info msg="Start streaming server"
Oct 30 00:04:00.433871 containerd[1510]: time="2025-10-30T00:04:00.433815533Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 30 00:04:00.433871 containerd[1510]: time="2025-10-30T00:04:00.433823827Z" level=info msg="runtime interface starting up..."
Oct 30 00:04:00.433871 containerd[1510]: time="2025-10-30T00:04:00.433830426Z" level=info msg="starting plugins..."
Oct 30 00:04:00.433871 containerd[1510]: time="2025-10-30T00:04:00.433846439Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 30 00:04:00.434223 kernel: [drm] number of scanouts: 1
Oct 30 00:04:00.435260 kernel: [drm] number of cap sets: 0
Oct 30 00:04:00.435363 containerd[1510]: time="2025-10-30T00:04:00.435327737Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 30 00:04:00.435466 containerd[1510]: time="2025-10-30T00:04:00.435416775Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 30 00:04:00.435630 containerd[1510]: time="2025-10-30T00:04:00.435509004Z" level=info msg="containerd successfully booted in 0.470454s"
Oct 30 00:04:00.435712 systemd[1]: Started containerd.service - containerd container runtime.
Oct 30 00:04:00.441134 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 30 00:04:00.457571 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 30 00:04:00.457669 kernel: Console: switching to colour frame buffer device 128x48
Oct 30 00:04:00.469185 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 30 00:04:00.506090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:04:00.642576 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:04:00.644087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:04:00.647314 systemd-logind[1479]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 30 00:04:00.650675 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 30 00:04:00.676990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:04:00.753452 kernel: EDAC MC: Ver: 3.0.0
Oct 30 00:04:00.759926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:04:00.761212 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:04:00.770738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:04:00.850651 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:04:00.883931 tar[1484]: linux-amd64/README.md
Oct 30 00:04:00.908223 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 30 00:04:01.218234 systemd-networkd[1424]: eth1: Gained IPv6LL
Oct 30 00:04:01.221775 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 30 00:04:01.225243 systemd[1]: Reached target network-online.target - Network is Online.
Oct 30 00:04:01.230197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 00:04:01.235613 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 30 00:04:01.275273 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 30 00:04:01.282407 systemd-networkd[1424]: eth0: Gained IPv6LL
Oct 30 00:04:02.596821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 00:04:02.597784 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 30 00:04:02.600190 systemd[1]: Startup finished in 3.677s (kernel) + 6.337s (initrd) + 6.664s (userspace) = 16.679s.
Oct 30 00:04:02.612564 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 30 00:04:03.710640 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 30 00:04:03.715608 systemd[1]: Started sshd@0-147.182.197.56:22-139.178.89.65:56726.service - OpenSSH per-connection server daemon (139.178.89.65:56726).
Oct 30 00:04:03.766484 kubelet[1656]: E1030 00:04:03.766391 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 00:04:03.770773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 00:04:03.771057 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 00:04:03.771838 systemd[1]: kubelet.service: Consumed 1.890s CPU time, 264.3M memory peak.
Oct 30 00:04:03.850873 sshd[1667]: Accepted publickey for core from 139.178.89.65 port 56726 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:04:03.853708 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:04:03.872180 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 30 00:04:03.874159 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 30 00:04:03.880268 systemd-logind[1479]: New session 1 of user core.
Oct 30 00:04:03.909439 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 30 00:04:03.913309 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 30 00:04:03.931339 (systemd)[1673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 30 00:04:03.935480 systemd-logind[1479]: New session c1 of user core.
Oct 30 00:04:04.111516 systemd[1673]: Queued start job for default target default.target.
Oct 30 00:04:04.122819 systemd[1673]: Created slice app.slice - User Application Slice.
Oct 30 00:04:04.122857 systemd[1673]: Reached target paths.target - Paths.
Oct 30 00:04:04.122905 systemd[1673]: Reached target timers.target - Timers.
Oct 30 00:04:04.124656 systemd[1673]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 30 00:04:04.140466 systemd[1673]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 30 00:04:04.140619 systemd[1673]: Reached target sockets.target - Sockets.
Oct 30 00:04:04.140676 systemd[1673]: Reached target basic.target - Basic System.
Oct 30 00:04:04.140722 systemd[1673]: Reached target default.target - Main User Target.
Oct 30 00:04:04.140765 systemd[1673]: Startup finished in 194ms.
Oct 30 00:04:04.140930 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 30 00:04:04.157581 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 30 00:04:04.230300 systemd[1]: Started sshd@1-147.182.197.56:22-139.178.89.65:56732.service - OpenSSH per-connection server daemon (139.178.89.65:56732).
Oct 30 00:04:04.310729 sshd[1684]: Accepted publickey for core from 139.178.89.65 port 56732 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:04:04.312758 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:04:04.318899 systemd-logind[1479]: New session 2 of user core.
Oct 30 00:04:04.337485 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 30 00:04:04.406265 sshd[1687]: Connection closed by 139.178.89.65 port 56732
Oct 30 00:04:04.406935 sshd-session[1684]: pam_unix(sshd:session): session closed for user core
Oct 30 00:04:04.420759 systemd[1]: sshd@1-147.182.197.56:22-139.178.89.65:56732.service: Deactivated successfully.
Oct 30 00:04:04.423085 systemd[1]: session-2.scope: Deactivated successfully.
Oct 30 00:04:04.425196 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit.
Oct 30 00:04:04.427888 systemd[1]: Started sshd@2-147.182.197.56:22-139.178.89.65:56738.service - OpenSSH per-connection server daemon (139.178.89.65:56738).
Oct 30 00:04:04.430561 systemd-logind[1479]: Removed session 2.
Oct 30 00:04:04.517452 sshd[1693]: Accepted publickey for core from 139.178.89.65 port 56738 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:04:04.518939 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:04:04.524719 systemd-logind[1479]: New session 3 of user core.
Oct 30 00:04:04.533742 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 30 00:04:04.591838 sshd[1696]: Connection closed by 139.178.89.65 port 56738
Oct 30 00:04:04.592798 sshd-session[1693]: pam_unix(sshd:session): session closed for user core
Oct 30 00:04:04.605627 systemd[1]: sshd@2-147.182.197.56:22-139.178.89.65:56738.service: Deactivated successfully.
Oct 30 00:04:04.608056 systemd[1]: session-3.scope: Deactivated successfully.
Oct 30 00:04:04.609057 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit.
Oct 30 00:04:04.613528 systemd[1]: Started sshd@3-147.182.197.56:22-139.178.89.65:56746.service - OpenSSH per-connection server daemon (139.178.89.65:56746).
Oct 30 00:04:04.614842 systemd-logind[1479]: Removed session 3.
Oct 30 00:04:04.682002 sshd[1702]: Accepted publickey for core from 139.178.89.65 port 56746 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:04:04.683320 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:04:04.690147 systemd-logind[1479]: New session 4 of user core.
Oct 30 00:04:04.711437 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 30 00:04:04.778481 sshd[1705]: Connection closed by 139.178.89.65 port 56746
Oct 30 00:04:04.779127 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
Oct 30 00:04:04.789133 systemd[1]: sshd@3-147.182.197.56:22-139.178.89.65:56746.service: Deactivated successfully.
Oct 30 00:04:04.791271 systemd[1]: session-4.scope: Deactivated successfully.
Oct 30 00:04:04.793215 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit.
Oct 30 00:04:04.796339 systemd[1]: Started sshd@4-147.182.197.56:22-139.178.89.65:56758.service - OpenSSH per-connection server daemon (139.178.89.65:56758).
Oct 30 00:04:04.797493 systemd-logind[1479]: Removed session 4.
Oct 30 00:04:04.867205 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 56758 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:04:04.869198 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:04:04.877391 systemd-logind[1479]: New session 5 of user core.
Oct 30 00:04:04.888465 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 30 00:04:04.958978 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 30 00:04:04.959837 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 00:04:04.973916 sudo[1715]: pam_unix(sudo:session): session closed for user root
Oct 30 00:04:04.978535 sshd[1714]: Connection closed by 139.178.89.65 port 56758
Oct 30 00:04:04.978339 sshd-session[1711]: pam_unix(sshd:session): session closed for user core
Oct 30 00:04:04.989518 systemd[1]: sshd@4-147.182.197.56:22-139.178.89.65:56758.service: Deactivated successfully.
Oct 30 00:04:04.991971 systemd[1]: session-5.scope: Deactivated successfully.
Oct 30 00:04:04.993066 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit.
Oct 30 00:04:04.996403 systemd[1]: Started sshd@5-147.182.197.56:22-139.178.89.65:56770.service - OpenSSH per-connection server daemon (139.178.89.65:56770).
Oct 30 00:04:04.998419 systemd-logind[1479]: Removed session 5.
Oct 30 00:04:05.061262 sshd[1721]: Accepted publickey for core from 139.178.89.65 port 56770 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:04:05.063327 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:04:05.070875 systemd-logind[1479]: New session 6 of user core.
Oct 30 00:04:05.078603 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 30 00:04:05.141230 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 30 00:04:05.141584 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 00:04:05.149930 sudo[1726]: pam_unix(sudo:session): session closed for user root
Oct 30 00:04:05.157714 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 30 00:04:05.157998 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 00:04:05.170901 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 00:04:05.223570 augenrules[1748]: No rules
Oct 30 00:04:05.225597 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 00:04:05.225932 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 00:04:05.227143 sudo[1725]: pam_unix(sudo:session): session closed for user root
Oct 30 00:04:05.230894 sshd[1724]: Connection closed by 139.178.89.65 port 56770
Oct 30 00:04:05.231380 sshd-session[1721]: pam_unix(sshd:session): session closed for user core
Oct 30 00:04:05.241339 systemd[1]: sshd@5-147.182.197.56:22-139.178.89.65:56770.service: Deactivated successfully.
Oct 30 00:04:05.243376 systemd[1]: session-6.scope: Deactivated successfully.
Oct 30 00:04:05.244342 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit.
Oct 30 00:04:05.247536 systemd[1]: Started sshd@6-147.182.197.56:22-139.178.89.65:56778.service - OpenSSH per-connection server daemon (139.178.89.65:56778).
Oct 30 00:04:05.249735 systemd-logind[1479]: Removed session 6.
Oct 30 00:04:05.316426 sshd[1757]: Accepted publickey for core from 139.178.89.65 port 56778 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:04:05.317964 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:04:05.385148 systemd-logind[1479]: New session 7 of user core.
Oct 30 00:04:05.394403 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 30 00:04:05.454274 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 30 00:04:05.454637 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 00:04:06.060803 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 30 00:04:06.076875 (dockerd)[1779]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 30 00:04:06.529315 dockerd[1779]: time="2025-10-30T00:04:06.529154305Z" level=info msg="Starting up"
Oct 30 00:04:06.531466 dockerd[1779]: time="2025-10-30T00:04:06.531407031Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 30 00:04:06.549288 dockerd[1779]: time="2025-10-30T00:04:06.549204879Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 30 00:04:06.571302 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport669602080-merged.mount: Deactivated successfully.
Oct 30 00:04:06.615068 dockerd[1779]: time="2025-10-30T00:04:06.614801879Z" level=info msg="Loading containers: start."
Oct 30 00:04:06.630195 kernel: Initializing XFRM netlink socket
Oct 30 00:04:06.982468 systemd-networkd[1424]: docker0: Link UP
Oct 30 00:04:06.990199 dockerd[1779]: time="2025-10-30T00:04:06.989280612Z" level=info msg="Loading containers: done."
Oct 30 00:04:07.008887 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck200677636-merged.mount: Deactivated successfully.
Oct 30 00:04:07.011665 dockerd[1779]: time="2025-10-30T00:04:07.011214015Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 30 00:04:07.011665 dockerd[1779]: time="2025-10-30T00:04:07.011323445Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 30 00:04:07.011665 dockerd[1779]: time="2025-10-30T00:04:07.011429156Z" level=info msg="Initializing buildkit"
Oct 30 00:04:07.048694 dockerd[1779]: time="2025-10-30T00:04:07.048613355Z" level=info msg="Completed buildkit initialization"
Oct 30 00:04:07.063030 dockerd[1779]: time="2025-10-30T00:04:07.062924627Z" level=info msg="Daemon has completed initialization"
Oct 30 00:04:07.063535 dockerd[1779]: time="2025-10-30T00:04:07.063312481Z" level=info msg="API listen on /run/docker.sock"
Oct 30 00:04:07.063916 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 30 00:04:08.148569 containerd[1510]: time="2025-10-30T00:04:08.148512223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Oct 30 00:04:08.716345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3317471659.mount: Deactivated successfully.
Oct 30 00:04:10.121750 containerd[1510]: time="2025-10-30T00:04:10.120676486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:10.121750 containerd[1510]: time="2025-10-30T00:04:10.121701081Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Oct 30 00:04:10.122321 containerd[1510]: time="2025-10-30T00:04:10.122291674Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:10.124263 containerd[1510]: time="2025-10-30T00:04:10.124231104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:10.125670 containerd[1510]: time="2025-10-30T00:04:10.125626062Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.976567617s"
Oct 30 00:04:10.125853 containerd[1510]: time="2025-10-30T00:04:10.125828657Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Oct 30 00:04:10.126651 containerd[1510]: time="2025-10-30T00:04:10.126597312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Oct 30 00:04:11.879959 containerd[1510]: time="2025-10-30T00:04:11.879890376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:11.881180 containerd[1510]: time="2025-10-30T00:04:11.881126058Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Oct 30 00:04:11.882058 containerd[1510]: time="2025-10-30T00:04:11.882020347Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:11.885883 containerd[1510]: time="2025-10-30T00:04:11.885833865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:11.887699 containerd[1510]: time="2025-10-30T00:04:11.887649236Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.760871846s"
Oct 30 00:04:11.887861 containerd[1510]: time="2025-10-30T00:04:11.887835960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Oct 30 00:04:11.888444 containerd[1510]: time="2025-10-30T00:04:11.888406632Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Oct 30 00:04:13.250520 containerd[1510]: time="2025-10-30T00:04:13.250441094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:13.251576 containerd[1510]: time="2025-10-30T00:04:13.251526078Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Oct 30 00:04:13.253136 containerd[1510]: time="2025-10-30T00:04:13.252177966Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:13.254797 containerd[1510]: time="2025-10-30T00:04:13.254752947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:13.255812 containerd[1510]: time="2025-10-30T00:04:13.255687067Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.367247438s"
Oct 30 00:04:13.255812 containerd[1510]: time="2025-10-30T00:04:13.255723744Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Oct 30 00:04:13.256571 containerd[1510]: time="2025-10-30T00:04:13.256429196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Oct 30 00:04:13.971933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 30 00:04:13.976550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 00:04:14.246425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 00:04:14.256617 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 30 00:04:14.326296 kubelet[2075]: E1030 00:04:14.326230 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 00:04:14.332798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 00:04:14.332969 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 00:04:14.333529 systemd[1]: kubelet.service: Consumed 280ms CPU time, 111.1M memory peak.
Oct 30 00:04:14.596233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913563575.mount: Deactivated successfully.
Oct 30 00:04:15.292797 containerd[1510]: time="2025-10-30T00:04:15.292697774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:15.294037 containerd[1510]: time="2025-10-30T00:04:15.293983110Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:15.294149 containerd[1510]: time="2025-10-30T00:04:15.294060481Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Oct 30 00:04:15.296905 containerd[1510]: time="2025-10-30T00:04:15.296863052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:15.298358 containerd[1510]: time="2025-10-30T00:04:15.298310087Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.041836094s"
Oct 30 00:04:15.298474 containerd[1510]: time="2025-10-30T00:04:15.298459291Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Oct 30 00:04:15.299314 containerd[1510]: time="2025-10-30T00:04:15.299258800Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Oct 30 00:04:15.301057 systemd-resolved[1378]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Oct 30 00:04:15.771940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093887797.mount: Deactivated successfully.
Oct 30 00:04:16.768738 containerd[1510]: time="2025-10-30T00:04:16.768676852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:16.772773 containerd[1510]: time="2025-10-30T00:04:16.772727598Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Oct 30 00:04:16.773236 containerd[1510]: time="2025-10-30T00:04:16.773190286Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:16.775652 containerd[1510]: time="2025-10-30T00:04:16.775596106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:16.776774 containerd[1510]: time="2025-10-30T00:04:16.776603060Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.477314568s"
Oct 30 00:04:16.776774 containerd[1510]: time="2025-10-30T00:04:16.776641961Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Oct 30 00:04:16.777278 containerd[1510]: time="2025-10-30T00:04:16.777256482Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 30 00:04:17.565521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3249340089.mount: Deactivated successfully.
Oct 30 00:04:17.572089 containerd[1510]: time="2025-10-30T00:04:17.572020072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 30 00:04:17.574124 containerd[1510]: time="2025-10-30T00:04:17.574025816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 30 00:04:17.574808 containerd[1510]: time="2025-10-30T00:04:17.574768100Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 30 00:04:17.578277 containerd[1510]: time="2025-10-30T00:04:17.578209747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:04:17.580642 containerd[1510]: time="2025-10-30T00:04:17.580445519Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 803.149936ms" Oct 30 00:04:17.580642 containerd[1510]: time="2025-10-30T00:04:17.580513198Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 30 00:04:17.581194 containerd[1510]: time="2025-10-30T00:04:17.581140939Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 30 00:04:18.130392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430138812.mount: Deactivated successfully. Oct 30 00:04:18.369346 systemd-resolved[1378]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
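The `var-lib-containerd-tmpmounts-containerd\x2dmount3430138812.mount` units above use systemd's unit-name escaping: `/` path separators become `-`, and a literal `-` in a component becomes `\x2d`. A sketch of decoding such a name back to its path, roughly what `systemd-escape --unescape --path` does — the helper is illustrative and skips edge cases like leading-dot escaping:

```python
import re

def unescape_unit_path(unit: str) -> str:
    """Decode a systemd-escaped unit name (e.g. a .mount unit) back to a path.
    '-' separates components; '\\xNN' encodes an escaped byte within one."""
    name = unit.rsplit(".", 1)[0]          # drop the ".mount" (or other) suffix
    parts = name.split("-")
    decoded = [re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), p)
               for p in parts]
    return "/" + "/".join(decoded)
```

So the unit above decodes to `/var/lib/containerd/tmpmounts/containerd-mount3430138812`, a temporary mount containerd created while unpacking an image layer.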
Oct 30 00:04:20.067146 containerd[1510]: time="2025-10-30T00:04:20.067055828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:20.068457 containerd[1510]: time="2025-10-30T00:04:20.068409402Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Oct 30 00:04:20.069088 containerd[1510]: time="2025-10-30T00:04:20.069057759Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:20.071808 containerd[1510]: time="2025-10-30T00:04:20.071767307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:20.072916 containerd[1510]: time="2025-10-30T00:04:20.072885101Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.491558149s" Oct 30 00:04:20.072979 containerd[1510]: time="2025-10-30T00:04:20.072920608Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 30 00:04:23.007006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:04:23.007237 systemd[1]: kubelet.service: Consumed 280ms CPU time, 111.1M memory peak. Oct 30 00:04:23.011167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:04:23.046484 systemd[1]: Reload requested from client PID 2223 ('systemctl') (unit session-7.scope)... 
Oct 30 00:04:23.046503 systemd[1]: Reloading... Oct 30 00:04:23.196254 zram_generator::config[2275]: No configuration found. Oct 30 00:04:23.474570 systemd[1]: Reloading finished in 427 ms. Oct 30 00:04:23.539644 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 30 00:04:23.539732 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 30 00:04:23.539985 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:04:23.540043 systemd[1]: kubelet.service: Consumed 115ms CPU time, 97.6M memory peak. Oct 30 00:04:23.541926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:04:23.706143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:04:23.717573 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:04:23.783210 kubelet[2320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:04:23.783210 kubelet[2320]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:04:23.783210 kubelet[2320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
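The fatal kubelet exit earlier in the log (status 1, `run.go:72`) was caused by a missing `/var/lib/kubelet/config.yaml`; that file is normally written by `kubeadm init` or `kubeadm join` before the service can start successfully, which is why the restart here, after the reload, gets further. A minimal sketch for flagging that specific failure when scanning a journal dump — the helper name is illustrative:

```python
def kubelet_missing_config(line: str) -> bool:
    """True if a journal line is the kubelet 'config file not found' fatal error."""
    return ("failed to load kubelet config file" in line
            and "/var/lib/kubelet/config.yaml" in line
            and "no such file or directory" in line)
```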
Oct 30 00:04:23.783210 kubelet[2320]: I1030 00:04:23.782594 2320 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:04:24.486451 kubelet[2320]: I1030 00:04:24.486399 2320 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 00:04:24.486642 kubelet[2320]: I1030 00:04:24.486631 2320 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:04:24.487032 kubelet[2320]: I1030 00:04:24.487012 2320 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 00:04:24.516240 kubelet[2320]: I1030 00:04:24.516190 2320 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:04:24.520185 kubelet[2320]: E1030 00:04:24.519066 2320 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.182.197.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.182.197.56:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:04:24.528351 kubelet[2320]: I1030 00:04:24.528302 2320 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:04:24.533462 kubelet[2320]: I1030 00:04:24.533427 2320 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 00:04:24.536622 kubelet[2320]: I1030 00:04:24.536529 2320 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:04:24.536847 kubelet[2320]: I1030 00:04:24.536608 2320 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-959986c1c8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:04:24.538502 kubelet[2320]: I1030 00:04:24.538435 2320 topology_manager.go:138] "Creating topology manager 
with none policy" Oct 30 00:04:24.538502 kubelet[2320]: I1030 00:04:24.538471 2320 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 00:04:24.539770 kubelet[2320]: I1030 00:04:24.539706 2320 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:04:24.543118 kubelet[2320]: I1030 00:04:24.543065 2320 kubelet.go:446] "Attempting to sync node with API server" Oct 30 00:04:24.543277 kubelet[2320]: I1030 00:04:24.543250 2320 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:04:24.543327 kubelet[2320]: I1030 00:04:24.543296 2320 kubelet.go:352] "Adding apiserver pod source" Oct 30 00:04:24.543351 kubelet[2320]: I1030 00:04:24.543341 2320 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:04:24.549929 kubelet[2320]: W1030 00:04:24.549219 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.182.197.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-959986c1c8&limit=500&resourceVersion=0": dial tcp 147.182.197.56:6443: connect: connection refused Oct 30 00:04:24.549929 kubelet[2320]: E1030 00:04:24.549313 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.182.197.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-959986c1c8&limit=500&resourceVersion=0\": dial tcp 147.182.197.56:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:04:24.549929 kubelet[2320]: W1030 00:04:24.549707 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.197.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.197.56:6443: connect: connection refused Oct 30 00:04:24.549929 kubelet[2320]: E1030 00:04:24.549743 2320 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.197.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.197.56:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:04:24.551553 kubelet[2320]: I1030 00:04:24.551523 2320 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:04:24.555686 kubelet[2320]: I1030 00:04:24.555654 2320 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 00:04:24.555945 kubelet[2320]: W1030 00:04:24.555930 2320 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 00:04:24.561132 kubelet[2320]: I1030 00:04:24.560768 2320 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 00:04:24.561132 kubelet[2320]: I1030 00:04:24.560816 2320 server.go:1287] "Started kubelet" Oct 30 00:04:24.562376 kubelet[2320]: I1030 00:04:24.561851 2320 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:04:24.564829 kubelet[2320]: I1030 00:04:24.563951 2320 server.go:479] "Adding debug handlers to kubelet server" Oct 30 00:04:24.569913 kubelet[2320]: I1030 00:04:24.569881 2320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:04:24.572112 kubelet[2320]: I1030 00:04:24.571664 2320 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:04:24.575991 kubelet[2320]: I1030 00:04:24.575666 2320 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:04:24.580928 kubelet[2320]: I1030 00:04:24.580889 2320 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:04:24.585230 kubelet[2320]: I1030 00:04:24.583823 2320 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 00:04:24.585230 kubelet[2320]: E1030 00:04:24.584492 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-959986c1c8\" not found" Oct 30 00:04:24.586021 kubelet[2320]: E1030 00:04:24.585961 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.197.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-959986c1c8?timeout=10s\": dial tcp 147.182.197.56:6443: connect: connection refused" interval="200ms" Oct 30 00:04:24.588983 kubelet[2320]: I1030 00:04:24.588455 2320 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 00:04:24.588983 kubelet[2320]: I1030 00:04:24.588522 2320 reconciler.go:26] "Reconciler: start to sync state" Oct 30 00:04:24.589220 kubelet[2320]: E1030 00:04:24.586437 2320 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.182.197.56:6443/api/v1/namespaces/default/events\": dial tcp 147.182.197.56:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-959986c1c8.18731bffc4964cda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-959986c1c8,UID:ci-4459.1.0-n-959986c1c8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-959986c1c8,},FirstTimestamp:2025-10-30 00:04:24.560790746 +0000 UTC m=+0.838755710,LastTimestamp:2025-10-30 00:04:24.560790746 +0000 UTC m=+0.838755710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-959986c1c8,}" Oct 30 
00:04:24.589560 kubelet[2320]: I1030 00:04:24.589543 2320 factory.go:221] Registration of the systemd container factory successfully Oct 30 00:04:24.589760 kubelet[2320]: I1030 00:04:24.589740 2320 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:04:24.592724 kubelet[2320]: W1030 00:04:24.591021 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.182.197.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.197.56:6443: connect: connection refused Oct 30 00:04:24.594079 kubelet[2320]: E1030 00:04:24.594038 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.182.197.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.182.197.56:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:04:24.598190 kubelet[2320]: I1030 00:04:24.598134 2320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 00:04:24.600306 kubelet[2320]: I1030 00:04:24.600276 2320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 00:04:24.600306 kubelet[2320]: I1030 00:04:24.600309 2320 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 00:04:24.600437 kubelet[2320]: I1030 00:04:24.600345 2320 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 30 00:04:24.600437 kubelet[2320]: I1030 00:04:24.600355 2320 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 00:04:24.600437 kubelet[2320]: E1030 00:04:24.600403 2320 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:04:24.605195 kubelet[2320]: I1030 00:04:24.605169 2320 factory.go:221] Registration of the containerd container factory successfully Oct 30 00:04:24.608372 kubelet[2320]: W1030 00:04:24.608301 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.182.197.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.197.56:6443: connect: connection refused Oct 30 00:04:24.608482 kubelet[2320]: E1030 00:04:24.608388 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.182.197.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.182.197.56:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:04:24.612943 kubelet[2320]: E1030 00:04:24.612915 2320 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:04:24.632053 kubelet[2320]: I1030 00:04:24.632025 2320 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:04:24.632366 kubelet[2320]: I1030 00:04:24.632325 2320 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:04:24.632471 kubelet[2320]: I1030 00:04:24.632461 2320 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:04:24.635859 kubelet[2320]: I1030 00:04:24.635831 2320 policy_none.go:49] "None policy: Start" Oct 30 00:04:24.636076 kubelet[2320]: I1030 00:04:24.636062 2320 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 00:04:24.636551 kubelet[2320]: I1030 00:04:24.636214 2320 state_mem.go:35] "Initializing new in-memory state store" Oct 30 00:04:24.644508 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 00:04:24.659084 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 00:04:24.663882 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
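Nearly every reflector, certificate-manager, lease, and event error in this stretch is the same underlying condition: `dial tcp 147.182.197.56:6443: connect: connection refused`, i.e. the API server is not listening yet — expected on a control-plane node whose static pods are only now being created. A sketch for collapsing those repeats into the distinct unreachable endpoints when triaging such a log (regex and helper are illustrative):

```python
import re

DIAL_RE = re.compile(r"dial tcp ([0-9.]+:[0-9]+): connect: connection refused")

def refused_endpoints(lines):
    """Return the distinct host:port targets that refused connections."""
    return {m.group(1) for line in lines for m in DIAL_RE.finditer(line)}
```

Applied here, every error reduces to the single endpoint `147.182.197.56:6443`, pointing at one root cause rather than many.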
Oct 30 00:04:24.683421 kubelet[2320]: I1030 00:04:24.683371 2320 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 00:04:24.684496 kubelet[2320]: I1030 00:04:24.684373 2320 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:04:24.684669 kubelet[2320]: I1030 00:04:24.684392 2320 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:04:24.685199 kubelet[2320]: E1030 00:04:24.684594 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-959986c1c8\" not found" Oct 30 00:04:24.686031 kubelet[2320]: I1030 00:04:24.685926 2320 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:04:24.686853 kubelet[2320]: E1030 00:04:24.686739 2320 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:04:24.686853 kubelet[2320]: E1030 00:04:24.686780 2320 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-n-959986c1c8\" not found" Oct 30 00:04:24.712603 systemd[1]: Created slice kubepods-burstable-poddabc3e21793fd3624d96ded1d9b74327.slice - libcontainer container kubepods-burstable-poddabc3e21793fd3624d96ded1d9b74327.slice. Oct 30 00:04:24.733726 kubelet[2320]: E1030 00:04:24.733485 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-959986c1c8\" not found" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.737672 systemd[1]: Created slice kubepods-burstable-podfb943031b057a97e307a24bb53be2153.slice - libcontainer container kubepods-burstable-podfb943031b057a97e307a24bb53be2153.slice. 
Oct 30 00:04:24.742133 kubelet[2320]: E1030 00:04:24.742086 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-959986c1c8\" not found" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.746539 systemd[1]: Created slice kubepods-burstable-pod5b39b6432db2bac85e54d99f5f8010da.slice - libcontainer container kubepods-burstable-pod5b39b6432db2bac85e54d99f5f8010da.slice. Oct 30 00:04:24.748926 kubelet[2320]: E1030 00:04:24.748669 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-959986c1c8\" not found" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.787720 kubelet[2320]: E1030 00:04:24.787563 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.197.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-959986c1c8?timeout=10s\": dial tcp 147.182.197.56:6443: connect: connection refused" interval="400ms" Oct 30 00:04:24.788672 kubelet[2320]: I1030 00:04:24.788610 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.789344 kubelet[2320]: E1030 00:04:24.789207 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.182.197.56:6443/api/v1/nodes\": dial tcp 147.182.197.56:6443: connect: connection refused" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.789344 kubelet[2320]: I1030 00:04:24.789243 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dabc3e21793fd3624d96ded1d9b74327-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-959986c1c8\" (UID: \"dabc3e21793fd3624d96ded1d9b74327\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.789344 kubelet[2320]: I1030 00:04:24.789271 2320 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb943031b057a97e307a24bb53be2153-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-959986c1c8\" (UID: \"fb943031b057a97e307a24bb53be2153\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.789344 kubelet[2320]: I1030 00:04:24.789288 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb943031b057a97e307a24bb53be2153-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-959986c1c8\" (UID: \"fb943031b057a97e307a24bb53be2153\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.789344 kubelet[2320]: I1030 00:04:24.789304 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb943031b057a97e307a24bb53be2153-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-959986c1c8\" (UID: \"fb943031b057a97e307a24bb53be2153\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.789576 kubelet[2320]: I1030 00:04:24.789323 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b39b6432db2bac85e54d99f5f8010da-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" (UID: \"5b39b6432db2bac85e54d99f5f8010da\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.789576 kubelet[2320]: I1030 00:04:24.789340 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b39b6432db2bac85e54d99f5f8010da-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" (UID: \"5b39b6432db2bac85e54d99f5f8010da\") " 
pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.789576 kubelet[2320]: I1030 00:04:24.789372 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b39b6432db2bac85e54d99f5f8010da-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" (UID: \"5b39b6432db2bac85e54d99f5f8010da\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.789576 kubelet[2320]: I1030 00:04:24.789393 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b39b6432db2bac85e54d99f5f8010da-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" (UID: \"5b39b6432db2bac85e54d99f5f8010da\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.789576 kubelet[2320]: I1030 00:04:24.789443 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5b39b6432db2bac85e54d99f5f8010da-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" (UID: \"5b39b6432db2bac85e54d99f5f8010da\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.991025 kubelet[2320]: I1030 00:04:24.990704 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:24.991252 kubelet[2320]: E1030 00:04:24.991202 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.182.197.56:6443/api/v1/nodes\": dial tcp 147.182.197.56:6443: connect: connection refused" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:25.035074 kubelet[2320]: E1030 00:04:25.034953 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:25.036931 containerd[1510]: time="2025-10-30T00:04:25.036877145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-959986c1c8,Uid:dabc3e21793fd3624d96ded1d9b74327,Namespace:kube-system,Attempt:0,}" Oct 30 00:04:25.043952 kubelet[2320]: E1030 00:04:25.043906 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:25.044480 containerd[1510]: time="2025-10-30T00:04:25.044441003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-959986c1c8,Uid:fb943031b057a97e307a24bb53be2153,Namespace:kube-system,Attempt:0,}" Oct 30 00:04:25.049996 kubelet[2320]: E1030 00:04:25.049858 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:25.051410 containerd[1510]: time="2025-10-30T00:04:25.051204775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-959986c1c8,Uid:5b39b6432db2bac85e54d99f5f8010da,Namespace:kube-system,Attempt:0,}" Oct 30 00:04:25.162494 containerd[1510]: time="2025-10-30T00:04:25.162412499Z" level=info msg="connecting to shim 3a5cc9272d1dcde7ef0b62b4735bb566d87e841ef0f392d2f53eb183ffa55af2" address="unix:///run/containerd/s/368f5b39e442ed63f724907042a15045002ad5ba2db733223ec80009f085626e" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:25.180498 containerd[1510]: time="2025-10-30T00:04:25.180062312Z" level=info msg="connecting to shim 06d8745701a19c1ddfc2843d5630a26e003ae8ddd2d32c8b49ab0157567e696c" address="unix:///run/containerd/s/eb841a85f7b1489ce9e35507c7683ca4453b40f0c11a5f509dfc6f2789ae5aef" namespace=k8s.io protocol=ttrpc version=3 Oct 30 
00:04:25.186168 containerd[1510]: time="2025-10-30T00:04:25.185308091Z" level=info msg="connecting to shim c1d5a2443eee18d23065f739481375b9f07b86c7afe1adb53db4c4834255a8d9" address="unix:///run/containerd/s/27c814821b746b1bed79257ff7e38b30f506d94f9ba875a404d8d8d5ff04b74d" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:25.190744 kubelet[2320]: E1030 00:04:25.190683 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.197.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-959986c1c8?timeout=10s\": dial tcp 147.182.197.56:6443: connect: connection refused" interval="800ms" Oct 30 00:04:25.288619 systemd[1]: Started cri-containerd-c1d5a2443eee18d23065f739481375b9f07b86c7afe1adb53db4c4834255a8d9.scope - libcontainer container c1d5a2443eee18d23065f739481375b9f07b86c7afe1adb53db4c4834255a8d9. Oct 30 00:04:25.296844 systemd[1]: Started cri-containerd-06d8745701a19c1ddfc2843d5630a26e003ae8ddd2d32c8b49ab0157567e696c.scope - libcontainer container 06d8745701a19c1ddfc2843d5630a26e003ae8ddd2d32c8b49ab0157567e696c. Oct 30 00:04:25.299598 systemd[1]: Started cri-containerd-3a5cc9272d1dcde7ef0b62b4735bb566d87e841ef0f392d2f53eb183ffa55af2.scope - libcontainer container 3a5cc9272d1dcde7ef0b62b4735bb566d87e841ef0f392d2f53eb183ffa55af2. 
Oct 30 00:04:25.393390 kubelet[2320]: I1030 00:04:25.393358 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:25.394216 kubelet[2320]: E1030 00:04:25.394177 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.182.197.56:6443/api/v1/nodes\": dial tcp 147.182.197.56:6443: connect: connection refused" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:25.410118 containerd[1510]: time="2025-10-30T00:04:25.410058296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-959986c1c8,Uid:dabc3e21793fd3624d96ded1d9b74327,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1d5a2443eee18d23065f739481375b9f07b86c7afe1adb53db4c4834255a8d9\"" Oct 30 00:04:25.416290 kubelet[2320]: E1030 00:04:25.415741 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:25.427583 containerd[1510]: time="2025-10-30T00:04:25.427531410Z" level=info msg="CreateContainer within sandbox \"c1d5a2443eee18d23065f739481375b9f07b86c7afe1adb53db4c4834255a8d9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 00:04:25.437493 containerd[1510]: time="2025-10-30T00:04:25.437441844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-959986c1c8,Uid:fb943031b057a97e307a24bb53be2153,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a5cc9272d1dcde7ef0b62b4735bb566d87e841ef0f392d2f53eb183ffa55af2\"" Oct 30 00:04:25.456036 containerd[1510]: time="2025-10-30T00:04:25.454204034Z" level=info msg="Container a834094696d92742ff5add030b6ca9ce16f3f2530f878be94ce625b85a3e2095: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:04:25.461626 kubelet[2320]: E1030 00:04:25.461593 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:25.462744 containerd[1510]: time="2025-10-30T00:04:25.462701110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-959986c1c8,Uid:5b39b6432db2bac85e54d99f5f8010da,Namespace:kube-system,Attempt:0,} returns sandbox id \"06d8745701a19c1ddfc2843d5630a26e003ae8ddd2d32c8b49ab0157567e696c\"" Oct 30 00:04:25.466896 kubelet[2320]: E1030 00:04:25.466832 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:25.467609 containerd[1510]: time="2025-10-30T00:04:25.467576502Z" level=info msg="CreateContainer within sandbox \"3a5cc9272d1dcde7ef0b62b4735bb566d87e841ef0f392d2f53eb183ffa55af2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 00:04:25.468902 containerd[1510]: time="2025-10-30T00:04:25.468873003Z" level=info msg="CreateContainer within sandbox \"06d8745701a19c1ddfc2843d5630a26e003ae8ddd2d32c8b49ab0157567e696c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 00:04:25.475169 containerd[1510]: time="2025-10-30T00:04:25.475094733Z" level=info msg="CreateContainer within sandbox \"c1d5a2443eee18d23065f739481375b9f07b86c7afe1adb53db4c4834255a8d9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a834094696d92742ff5add030b6ca9ce16f3f2530f878be94ce625b85a3e2095\"" Oct 30 00:04:25.478038 containerd[1510]: time="2025-10-30T00:04:25.477957830Z" level=info msg="StartContainer for \"a834094696d92742ff5add030b6ca9ce16f3f2530f878be94ce625b85a3e2095\"" Oct 30 00:04:25.479608 containerd[1510]: time="2025-10-30T00:04:25.479512963Z" level=info msg="connecting to shim a834094696d92742ff5add030b6ca9ce16f3f2530f878be94ce625b85a3e2095" 
address="unix:///run/containerd/s/27c814821b746b1bed79257ff7e38b30f506d94f9ba875a404d8d8d5ff04b74d" protocol=ttrpc version=3 Oct 30 00:04:25.482794 containerd[1510]: time="2025-10-30T00:04:25.482726444Z" level=info msg="Container f9f178adb166726cc441f9248a2e31a222afd2552329e2af712fd1bd939b7ce7: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:04:25.487532 containerd[1510]: time="2025-10-30T00:04:25.487425797Z" level=info msg="Container a5d041813030d522e3f4c04fb11678b2fb086e8c3b34d1fcb3273de3db909b52: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:04:25.492444 containerd[1510]: time="2025-10-30T00:04:25.492397690Z" level=info msg="CreateContainer within sandbox \"06d8745701a19c1ddfc2843d5630a26e003ae8ddd2d32c8b49ab0157567e696c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9f178adb166726cc441f9248a2e31a222afd2552329e2af712fd1bd939b7ce7\"" Oct 30 00:04:25.493519 containerd[1510]: time="2025-10-30T00:04:25.493459581Z" level=info msg="StartContainer for \"f9f178adb166726cc441f9248a2e31a222afd2552329e2af712fd1bd939b7ce7\"" Oct 30 00:04:25.494918 containerd[1510]: time="2025-10-30T00:04:25.494879874Z" level=info msg="connecting to shim f9f178adb166726cc441f9248a2e31a222afd2552329e2af712fd1bd939b7ce7" address="unix:///run/containerd/s/eb841a85f7b1489ce9e35507c7683ca4453b40f0c11a5f509dfc6f2789ae5aef" protocol=ttrpc version=3 Oct 30 00:04:25.503472 containerd[1510]: time="2025-10-30T00:04:25.503129540Z" level=info msg="CreateContainer within sandbox \"3a5cc9272d1dcde7ef0b62b4735bb566d87e841ef0f392d2f53eb183ffa55af2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a5d041813030d522e3f4c04fb11678b2fb086e8c3b34d1fcb3273de3db909b52\"" Oct 30 00:04:25.505240 containerd[1510]: time="2025-10-30T00:04:25.504538025Z" level=info msg="StartContainer for \"a5d041813030d522e3f4c04fb11678b2fb086e8c3b34d1fcb3273de3db909b52\"" Oct 30 00:04:25.506449 containerd[1510]: time="2025-10-30T00:04:25.506413954Z" 
level=info msg="connecting to shim a5d041813030d522e3f4c04fb11678b2fb086e8c3b34d1fcb3273de3db909b52" address="unix:///run/containerd/s/368f5b39e442ed63f724907042a15045002ad5ba2db733223ec80009f085626e" protocol=ttrpc version=3 Oct 30 00:04:25.510340 systemd[1]: Started cri-containerd-a834094696d92742ff5add030b6ca9ce16f3f2530f878be94ce625b85a3e2095.scope - libcontainer container a834094696d92742ff5add030b6ca9ce16f3f2530f878be94ce625b85a3e2095. Oct 30 00:04:25.533505 systemd[1]: Started cri-containerd-f9f178adb166726cc441f9248a2e31a222afd2552329e2af712fd1bd939b7ce7.scope - libcontainer container f9f178adb166726cc441f9248a2e31a222afd2552329e2af712fd1bd939b7ce7. Oct 30 00:04:25.549531 systemd[1]: Started cri-containerd-a5d041813030d522e3f4c04fb11678b2fb086e8c3b34d1fcb3273de3db909b52.scope - libcontainer container a5d041813030d522e3f4c04fb11678b2fb086e8c3b34d1fcb3273de3db909b52. Oct 30 00:04:25.620908 containerd[1510]: time="2025-10-30T00:04:25.620772368Z" level=info msg="StartContainer for \"a834094696d92742ff5add030b6ca9ce16f3f2530f878be94ce625b85a3e2095\" returns successfully" Oct 30 00:04:25.642839 kubelet[2320]: E1030 00:04:25.642801 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-959986c1c8\" not found" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:25.643249 kubelet[2320]: E1030 00:04:25.642930 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:25.647298 containerd[1510]: time="2025-10-30T00:04:25.647178877Z" level=info msg="StartContainer for \"a5d041813030d522e3f4c04fb11678b2fb086e8c3b34d1fcb3273de3db909b52\" returns successfully" Oct 30 00:04:25.661173 containerd[1510]: time="2025-10-30T00:04:25.660466878Z" level=info msg="StartContainer for \"f9f178adb166726cc441f9248a2e31a222afd2552329e2af712fd1bd939b7ce7\" returns successfully" Oct 
30 00:04:25.697133 kubelet[2320]: W1030 00:04:25.697046 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.197.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.197.56:6443: connect: connection refused Oct 30 00:04:25.697311 kubelet[2320]: E1030 00:04:25.697141 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.197.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.197.56:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:04:25.757573 kubelet[2320]: W1030 00:04:25.757494 2320 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.182.197.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-959986c1c8&limit=500&resourceVersion=0": dial tcp 147.182.197.56:6443: connect: connection refused Oct 30 00:04:25.757794 kubelet[2320]: E1030 00:04:25.757591 2320 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.182.197.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-959986c1c8&limit=500&resourceVersion=0\": dial tcp 147.182.197.56:6443: connect: connection refused" logger="UnhandledError" Oct 30 00:04:26.197587 kubelet[2320]: I1030 00:04:26.197169 2320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:26.648585 kubelet[2320]: E1030 00:04:26.648181 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-959986c1c8\" not found" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:26.648585 kubelet[2320]: E1030 00:04:26.648413 2320 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:26.651765 kubelet[2320]: E1030 00:04:26.651729 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-959986c1c8\" not found" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:26.652831 kubelet[2320]: E1030 00:04:26.652797 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:26.654887 kubelet[2320]: E1030 00:04:26.654803 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-959986c1c8\" not found" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:26.655369 kubelet[2320]: E1030 00:04:26.655313 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:27.656265 kubelet[2320]: E1030 00:04:27.656192 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-959986c1c8\" not found" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:27.658504 kubelet[2320]: E1030 00:04:27.657598 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:27.659217 kubelet[2320]: E1030 00:04:27.659137 2320 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-959986c1c8\" not found" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:27.659707 kubelet[2320]: E1030 00:04:27.659590 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:27.660750 kubelet[2320]: E1030 00:04:27.660705 2320 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.1.0-n-959986c1c8\" not found" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:27.819152 kubelet[2320]: I1030 00:04:27.817599 2320 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:27.819640 kubelet[2320]: E1030 00:04:27.819274 2320 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.1.0-n-959986c1c8\": node \"ci-4459.1.0-n-959986c1c8\" not found" Oct 30 00:04:27.847128 kubelet[2320]: E1030 00:04:27.847049 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-959986c1c8\" not found" Oct 30 00:04:27.947869 kubelet[2320]: E1030 00:04:27.947682 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-959986c1c8\" not found" Oct 30 00:04:28.085481 kubelet[2320]: I1030 00:04:28.085425 2320 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:28.097777 kubelet[2320]: E1030 00:04:28.097436 2320 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:28.097777 kubelet[2320]: I1030 00:04:28.097491 2320 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:28.100122 kubelet[2320]: E1030 00:04:28.100061 2320 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-959986c1c8\" is forbidden: no PriorityClass 
with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:28.100122 kubelet[2320]: I1030 00:04:28.100110 2320 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:28.102564 kubelet[2320]: E1030 00:04:28.102499 2320 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-959986c1c8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:28.552141 kubelet[2320]: I1030 00:04:28.552026 2320 apiserver.go:52] "Watching apiserver" Oct 30 00:04:28.589031 kubelet[2320]: I1030 00:04:28.588945 2320 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 00:04:28.656525 kubelet[2320]: I1030 00:04:28.656491 2320 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:28.668949 kubelet[2320]: W1030 00:04:28.668878 2320 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 30 00:04:28.669630 kubelet[2320]: E1030 00:04:28.669581 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:29.658866 kubelet[2320]: E1030 00:04:29.658772 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:30.205513 systemd[1]: Reload requested from client PID 2595 ('systemctl') (unit session-7.scope)... Oct 30 00:04:30.205878 systemd[1]: Reloading... Oct 30 00:04:30.343146 zram_generator::config[2637]: No configuration found. 
Oct 30 00:04:30.665882 systemd[1]: Reloading finished in 459 ms. Oct 30 00:04:30.700934 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:04:30.715420 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 00:04:30.715902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:04:30.715976 systemd[1]: kubelet.service: Consumed 1.347s CPU time, 126.9M memory peak. Oct 30 00:04:30.718804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:04:30.901210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:04:30.917623 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:04:31.000260 kubelet[2689]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:04:31.000260 kubelet[2689]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:04:31.000260 kubelet[2689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 30 00:04:31.001779 kubelet[2689]: I1030 00:04:31.000953 2689 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:04:31.012693 kubelet[2689]: I1030 00:04:31.012638 2689 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 00:04:31.012870 kubelet[2689]: I1030 00:04:31.012860 2689 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:04:31.013358 kubelet[2689]: I1030 00:04:31.013342 2689 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 00:04:31.014966 kubelet[2689]: I1030 00:04:31.014941 2689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 30 00:04:31.024396 kubelet[2689]: I1030 00:04:31.024281 2689 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:04:31.031277 kubelet[2689]: I1030 00:04:31.031241 2689 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:04:31.038761 kubelet[2689]: I1030 00:04:31.038723 2689 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 00:04:31.039008 kubelet[2689]: I1030 00:04:31.038963 2689 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:04:31.039235 kubelet[2689]: I1030 00:04:31.039009 2689 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-959986c1c8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:04:31.039352 kubelet[2689]: I1030 00:04:31.039248 2689 topology_manager.go:138] "Creating topology manager 
with none policy" Oct 30 00:04:31.039352 kubelet[2689]: I1030 00:04:31.039269 2689 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 00:04:31.039504 kubelet[2689]: I1030 00:04:31.039431 2689 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:04:31.039624 kubelet[2689]: I1030 00:04:31.039610 2689 kubelet.go:446] "Attempting to sync node with API server" Oct 30 00:04:31.039680 kubelet[2689]: I1030 00:04:31.039636 2689 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:04:31.039864 kubelet[2689]: I1030 00:04:31.039776 2689 kubelet.go:352] "Adding apiserver pod source" Oct 30 00:04:31.039864 kubelet[2689]: I1030 00:04:31.039796 2689 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:04:31.047622 kubelet[2689]: I1030 00:04:31.046488 2689 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:04:31.048632 kubelet[2689]: I1030 00:04:31.048603 2689 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 00:04:31.051423 kubelet[2689]: I1030 00:04:31.051399 2689 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 00:04:31.051587 kubelet[2689]: I1030 00:04:31.051575 2689 server.go:1287] "Started kubelet" Oct 30 00:04:31.054372 kubelet[2689]: I1030 00:04:31.054301 2689 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:04:31.055982 kubelet[2689]: I1030 00:04:31.055952 2689 server.go:479] "Adding debug handlers to kubelet server" Oct 30 00:04:31.057459 kubelet[2689]: I1030 00:04:31.057432 2689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:04:31.070935 kubelet[2689]: I1030 00:04:31.070737 2689 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:04:31.072154 kubelet[2689]: I1030 00:04:31.071498 2689 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:04:31.072154 kubelet[2689]: I1030 00:04:31.071803 2689 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:04:31.072478 kubelet[2689]: I1030 00:04:31.072460 2689 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 00:04:31.072816 kubelet[2689]: E1030 00:04:31.072774 2689 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-959986c1c8\" not found" Oct 30 00:04:31.076435 kubelet[2689]: I1030 00:04:31.076036 2689 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 00:04:31.087238 kubelet[2689]: I1030 00:04:31.086824 2689 reconciler.go:26] "Reconciler: start to sync state" Oct 30 00:04:31.088875 kubelet[2689]: I1030 00:04:31.088848 2689 factory.go:221] Registration of the systemd container factory successfully Oct 30 00:04:31.089599 kubelet[2689]: I1030 00:04:31.089569 2689 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:04:31.090731 kubelet[2689]: I1030 00:04:31.090648 2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 00:04:31.094214 kubelet[2689]: I1030 00:04:31.092966 2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 00:04:31.094214 kubelet[2689]: I1030 00:04:31.092997 2689 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 00:04:31.094214 kubelet[2689]: I1030 00:04:31.093017 2689 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 30 00:04:31.094214 kubelet[2689]: I1030 00:04:31.093023 2689 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 00:04:31.094214 kubelet[2689]: E1030 00:04:31.093082 2689 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:04:31.101155 kubelet[2689]: E1030 00:04:31.100683 2689 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:04:31.101155 kubelet[2689]: I1030 00:04:31.100832 2689 factory.go:221] Registration of the containerd container factory successfully Oct 30 00:04:31.179012 kubelet[2689]: I1030 00:04:31.178889 2689 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:04:31.179497 kubelet[2689]: I1030 00:04:31.179476 2689 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:04:31.179739 kubelet[2689]: I1030 00:04:31.179711 2689 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:04:31.180497 kubelet[2689]: I1030 00:04:31.180394 2689 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 30 00:04:31.180718 kubelet[2689]: I1030 00:04:31.180684 2689 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 30 00:04:31.181290 kubelet[2689]: I1030 00:04:31.180863 2689 policy_none.go:49] "None policy: Start" Oct 30 00:04:31.181290 kubelet[2689]: I1030 00:04:31.180881 2689 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 00:04:31.181290 kubelet[2689]: I1030 00:04:31.180895 2689 state_mem.go:35] "Initializing new in-memory state store" Oct 30 00:04:31.181290 kubelet[2689]: I1030 00:04:31.181047 2689 state_mem.go:75] "Updated machine memory state" Oct 30 00:04:31.190162 kubelet[2689]: I1030 00:04:31.189892 2689 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 00:04:31.192199 
kubelet[2689]: I1030 00:04:31.192172 2689 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:04:31.192785 kubelet[2689]: I1030 00:04:31.192746 2689 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:04:31.193658 kubelet[2689]: I1030 00:04:31.193503 2689 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:04:31.195410 kubelet[2689]: I1030 00:04:31.195365 2689 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.200538 kubelet[2689]: I1030 00:04:31.200322 2689 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.203137 kubelet[2689]: E1030 00:04:31.202483 2689 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:04:31.208576 kubelet[2689]: I1030 00:04:31.203603 2689 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.217439 kubelet[2689]: W1030 00:04:31.216849 2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 30 00:04:31.227020 kubelet[2689]: W1030 00:04:31.226877 2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 30 00:04:31.228997 kubelet[2689]: W1030 00:04:31.228842 2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 30 00:04:31.229560 kubelet[2689]: E1030 00:04:31.229168 2689 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4459.1.0-n-959986c1c8\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.288343 kubelet[2689]: I1030 00:04:31.288295 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dabc3e21793fd3624d96ded1d9b74327-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-959986c1c8\" (UID: \"dabc3e21793fd3624d96ded1d9b74327\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.288696 kubelet[2689]: I1030 00:04:31.288497 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb943031b057a97e307a24bb53be2153-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-959986c1c8\" (UID: \"fb943031b057a97e307a24bb53be2153\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.288696 kubelet[2689]: I1030 00:04:31.288522 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b39b6432db2bac85e54d99f5f8010da-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" (UID: \"5b39b6432db2bac85e54d99f5f8010da\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.288696 kubelet[2689]: I1030 00:04:31.288540 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b39b6432db2bac85e54d99f5f8010da-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" (UID: \"5b39b6432db2bac85e54d99f5f8010da\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.288696 kubelet[2689]: I1030 00:04:31.288556 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5b39b6432db2bac85e54d99f5f8010da-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" (UID: \"5b39b6432db2bac85e54d99f5f8010da\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.288696 kubelet[2689]: I1030 00:04:31.288574 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b39b6432db2bac85e54d99f5f8010da-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" (UID: \"5b39b6432db2bac85e54d99f5f8010da\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.288852 kubelet[2689]: I1030 00:04:31.288594 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5b39b6432db2bac85e54d99f5f8010da-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" (UID: \"5b39b6432db2bac85e54d99f5f8010da\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.288852 kubelet[2689]: I1030 00:04:31.288609 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb943031b057a97e307a24bb53be2153-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-959986c1c8\" (UID: \"fb943031b057a97e307a24bb53be2153\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.288852 kubelet[2689]: I1030 00:04:31.288625 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb943031b057a97e307a24bb53be2153-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-959986c1c8\" (UID: \"fb943031b057a97e307a24bb53be2153\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" 
Oct 30 00:04:31.316548 kubelet[2689]: I1030 00:04:31.316304 2689 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.333205 kubelet[2689]: I1030 00:04:31.333127 2689 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.333947 kubelet[2689]: I1030 00:04:31.333833 2689 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-959986c1c8" Oct 30 00:04:31.518528 kubelet[2689]: E1030 00:04:31.517655 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:31.530705 kubelet[2689]: E1030 00:04:31.529029 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:31.531111 kubelet[2689]: E1030 00:04:31.531078 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:32.043827 kubelet[2689]: I1030 00:04:32.043257 2689 apiserver.go:52] "Watching apiserver" Oct 30 00:04:32.076850 kubelet[2689]: I1030 00:04:32.076818 2689 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 00:04:32.132193 kubelet[2689]: I1030 00:04:32.132039 2689 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:32.132646 kubelet[2689]: I1030 00:04:32.132625 2689 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:32.133200 kubelet[2689]: I1030 00:04:32.133175 2689 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:32.141352 kubelet[2689]: W1030 00:04:32.141315 2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 30 00:04:32.141501 kubelet[2689]: E1030 00:04:32.141384 2689 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-959986c1c8\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:32.141580 kubelet[2689]: E1030 00:04:32.141561 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:32.144650 kubelet[2689]: W1030 00:04:32.144191 2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 30 00:04:32.144650 kubelet[2689]: E1030 00:04:32.144255 2689 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-959986c1c8\" already exists" pod="kube-system/kube-scheduler-ci-4459.1.0-n-959986c1c8" Oct 30 00:04:32.144650 kubelet[2689]: E1030 00:04:32.144416 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:32.145295 kubelet[2689]: W1030 00:04:32.145276 2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 30 00:04:32.145561 kubelet[2689]: E1030 00:04:32.145544 2689 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-n-959986c1c8\" already exists" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" Oct 30 
00:04:32.145929 kubelet[2689]: E1030 00:04:32.145834 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:32.182199 kubelet[2689]: I1030 00:04:32.181939 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-959986c1c8" podStartSLOduration=1.181918094 podStartE2EDuration="1.181918094s" podCreationTimestamp="2025-10-30 00:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:04:32.167743498 +0000 UTC m=+1.243982730" watchObservedRunningTime="2025-10-30 00:04:32.181918094 +0000 UTC m=+1.258157325" Oct 30 00:04:32.183171 kubelet[2689]: I1030 00:04:32.183016 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-n-959986c1c8" podStartSLOduration=1.182997559 podStartE2EDuration="1.182997559s" podCreationTimestamp="2025-10-30 00:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:04:32.181887269 +0000 UTC m=+1.258126502" watchObservedRunningTime="2025-10-30 00:04:32.182997559 +0000 UTC m=+1.259236793" Oct 30 00:04:32.204843 kubelet[2689]: I1030 00:04:32.204746 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-n-959986c1c8" podStartSLOduration=4.204550136 podStartE2EDuration="4.204550136s" podCreationTimestamp="2025-10-30 00:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:04:32.20405186 +0000 UTC m=+1.280291098" watchObservedRunningTime="2025-10-30 00:04:32.204550136 +0000 UTC m=+1.280789375" Oct 30 00:04:33.134844 
kubelet[2689]: E1030 00:04:33.134549 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:33.134844 kubelet[2689]: E1030 00:04:33.134778 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:33.135322 kubelet[2689]: E1030 00:04:33.135142 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:34.135659 kubelet[2689]: E1030 00:04:34.135621 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:34.457535 kubelet[2689]: E1030 00:04:34.457408 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:34.953390 kubelet[2689]: I1030 00:04:34.953290 2689 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 00:04:34.954200 containerd[1510]: time="2025-10-30T00:04:34.954087400Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 30 00:04:34.955144 kubelet[2689]: I1030 00:04:34.954767 2689 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 00:04:35.612090 systemd[1]: Created slice kubepods-besteffort-pode0287187_ac3a_4e5b_9412_a1f6b073cfa7.slice - libcontainer container kubepods-besteffort-pode0287187_ac3a_4e5b_9412_a1f6b073cfa7.slice. 
Oct 30 00:04:35.716836 kubelet[2689]: I1030 00:04:35.716790 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0287187-ac3a-4e5b-9412-a1f6b073cfa7-xtables-lock\") pod \"kube-proxy-dzlz6\" (UID: \"e0287187-ac3a-4e5b-9412-a1f6b073cfa7\") " pod="kube-system/kube-proxy-dzlz6" Oct 30 00:04:35.717557 kubelet[2689]: I1030 00:04:35.716978 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0287187-ac3a-4e5b-9412-a1f6b073cfa7-lib-modules\") pod \"kube-proxy-dzlz6\" (UID: \"e0287187-ac3a-4e5b-9412-a1f6b073cfa7\") " pod="kube-system/kube-proxy-dzlz6" Oct 30 00:04:35.717557 kubelet[2689]: I1030 00:04:35.717029 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m44z9\" (UniqueName: \"kubernetes.io/projected/e0287187-ac3a-4e5b-9412-a1f6b073cfa7-kube-api-access-m44z9\") pod \"kube-proxy-dzlz6\" (UID: \"e0287187-ac3a-4e5b-9412-a1f6b073cfa7\") " pod="kube-system/kube-proxy-dzlz6" Oct 30 00:04:35.717557 kubelet[2689]: I1030 00:04:35.717067 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0287187-ac3a-4e5b-9412-a1f6b073cfa7-kube-proxy\") pod \"kube-proxy-dzlz6\" (UID: \"e0287187-ac3a-4e5b-9412-a1f6b073cfa7\") " pod="kube-system/kube-proxy-dzlz6" Oct 30 00:04:35.825671 kubelet[2689]: E1030 00:04:35.825542 2689 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 30 00:04:35.826395 kubelet[2689]: E1030 00:04:35.825956 2689 projected.go:194] Error preparing data for projected volume kube-api-access-m44z9 for pod kube-system/kube-proxy-dzlz6: configmap "kube-root-ca.crt" not found Oct 30 00:04:35.826395 kubelet[2689]: E1030 00:04:35.826054 2689 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0287187-ac3a-4e5b-9412-a1f6b073cfa7-kube-api-access-m44z9 podName:e0287187-ac3a-4e5b-9412-a1f6b073cfa7 nodeName:}" failed. No retries permitted until 2025-10-30 00:04:36.326030314 +0000 UTC m=+5.402269543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m44z9" (UniqueName: "kubernetes.io/projected/e0287187-ac3a-4e5b-9412-a1f6b073cfa7-kube-api-access-m44z9") pod "kube-proxy-dzlz6" (UID: "e0287187-ac3a-4e5b-9412-a1f6b073cfa7") : configmap "kube-root-ca.crt" not found Oct 30 00:04:36.050384 systemd[1]: Created slice kubepods-besteffort-pod1d7a7f3c_73e3_4a53_ba15_f4d896f7fd0b.slice - libcontainer container kubepods-besteffort-pod1d7a7f3c_73e3_4a53_ba15_f4d896f7fd0b.slice. Oct 30 00:04:36.119965 kubelet[2689]: I1030 00:04:36.119813 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d7a7f3c-73e3-4a53-ba15-f4d896f7fd0b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-6sj72\" (UID: \"1d7a7f3c-73e3-4a53-ba15-f4d896f7fd0b\") " pod="tigera-operator/tigera-operator-7dcd859c48-6sj72" Oct 30 00:04:36.119965 kubelet[2689]: I1030 00:04:36.119910 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xng9z\" (UniqueName: \"kubernetes.io/projected/1d7a7f3c-73e3-4a53-ba15-f4d896f7fd0b-kube-api-access-xng9z\") pod \"tigera-operator-7dcd859c48-6sj72\" (UID: \"1d7a7f3c-73e3-4a53-ba15-f4d896f7fd0b\") " pod="tigera-operator/tigera-operator-7dcd859c48-6sj72" Oct 30 00:04:36.356150 containerd[1510]: time="2025-10-30T00:04:36.356035240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6sj72,Uid:1d7a7f3c-73e3-4a53-ba15-f4d896f7fd0b,Namespace:tigera-operator,Attempt:0,}" Oct 30 00:04:36.384434 containerd[1510]: time="2025-10-30T00:04:36.384327440Z" level=info 
msg="connecting to shim 84f6051ac0d0ccf294f8c936f6af554e67c7c9ce44354b846922d8d628736623" address="unix:///run/containerd/s/0c1cff31576b0ac4783b628ee25eaff24afdbfa989754eac07ff1eb9e1c345a6" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:36.426141 systemd[1]: Started cri-containerd-84f6051ac0d0ccf294f8c936f6af554e67c7c9ce44354b846922d8d628736623.scope - libcontainer container 84f6051ac0d0ccf294f8c936f6af554e67c7c9ce44354b846922d8d628736623. Oct 30 00:04:36.493166 containerd[1510]: time="2025-10-30T00:04:36.493064475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6sj72,Uid:1d7a7f3c-73e3-4a53-ba15-f4d896f7fd0b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"84f6051ac0d0ccf294f8c936f6af554e67c7c9ce44354b846922d8d628736623\"" Oct 30 00:04:36.498125 containerd[1510]: time="2025-10-30T00:04:36.497896028Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 00:04:36.502254 systemd-resolved[1378]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Oct 30 00:04:36.522686 kubelet[2689]: E1030 00:04:36.522624 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:36.523824 containerd[1510]: time="2025-10-30T00:04:36.523775419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dzlz6,Uid:e0287187-ac3a-4e5b-9412-a1f6b073cfa7,Namespace:kube-system,Attempt:0,}" Oct 30 00:04:36.552766 containerd[1510]: time="2025-10-30T00:04:36.552692671Z" level=info msg="connecting to shim c2cc42975436aae2fca23ebe247b52d6e79c8eeb9ebefee4c525fd9a6ab413f6" address="unix:///run/containerd/s/db44bce75b769c4a64f98c10934672fded1d9a858d04fd0f0dbd4daf089b4cf3" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:36.581482 systemd[1]: Started cri-containerd-c2cc42975436aae2fca23ebe247b52d6e79c8eeb9ebefee4c525fd9a6ab413f6.scope - libcontainer container c2cc42975436aae2fca23ebe247b52d6e79c8eeb9ebefee4c525fd9a6ab413f6. 
Oct 30 00:04:36.630451 containerd[1510]: time="2025-10-30T00:04:36.629722715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dzlz6,Uid:e0287187-ac3a-4e5b-9412-a1f6b073cfa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2cc42975436aae2fca23ebe247b52d6e79c8eeb9ebefee4c525fd9a6ab413f6\"" Oct 30 00:04:36.631060 kubelet[2689]: E1030 00:04:36.631030 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:36.635675 containerd[1510]: time="2025-10-30T00:04:36.635560358Z" level=info msg="CreateContainer within sandbox \"c2cc42975436aae2fca23ebe247b52d6e79c8eeb9ebefee4c525fd9a6ab413f6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 00:04:36.652240 containerd[1510]: time="2025-10-30T00:04:36.652171089Z" level=info msg="Container d8c83495c17954d34be125fbc35e0d4751e59ae496899136f4f2282c3f981e9d: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:04:36.662215 containerd[1510]: time="2025-10-30T00:04:36.662142354Z" level=info msg="CreateContainer within sandbox \"c2cc42975436aae2fca23ebe247b52d6e79c8eeb9ebefee4c525fd9a6ab413f6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d8c83495c17954d34be125fbc35e0d4751e59ae496899136f4f2282c3f981e9d\"" Oct 30 00:04:36.663861 containerd[1510]: time="2025-10-30T00:04:36.663301269Z" level=info msg="StartContainer for \"d8c83495c17954d34be125fbc35e0d4751e59ae496899136f4f2282c3f981e9d\"" Oct 30 00:04:36.666699 containerd[1510]: time="2025-10-30T00:04:36.666648837Z" level=info msg="connecting to shim d8c83495c17954d34be125fbc35e0d4751e59ae496899136f4f2282c3f981e9d" address="unix:///run/containerd/s/db44bce75b769c4a64f98c10934672fded1d9a858d04fd0f0dbd4daf089b4cf3" protocol=ttrpc version=3 Oct 30 00:04:36.691380 systemd[1]: Started cri-containerd-d8c83495c17954d34be125fbc35e0d4751e59ae496899136f4f2282c3f981e9d.scope - 
libcontainer container d8c83495c17954d34be125fbc35e0d4751e59ae496899136f4f2282c3f981e9d. Oct 30 00:04:36.750762 containerd[1510]: time="2025-10-30T00:04:36.750601385Z" level=info msg="StartContainer for \"d8c83495c17954d34be125fbc35e0d4751e59ae496899136f4f2282c3f981e9d\" returns successfully" Oct 30 00:04:36.807501 kubelet[2689]: E1030 00:04:36.807457 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:37.146881 kubelet[2689]: E1030 00:04:37.146844 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:37.149276 kubelet[2689]: E1030 00:04:37.149208 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:37.201211 kubelet[2689]: I1030 00:04:37.201135 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dzlz6" podStartSLOduration=2.200873303 podStartE2EDuration="2.200873303s" podCreationTimestamp="2025-10-30 00:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:04:37.17708478 +0000 UTC m=+6.253324016" watchObservedRunningTime="2025-10-30 00:04:37.200873303 +0000 UTC m=+6.277112536" Oct 30 00:04:37.596033 kubelet[2689]: E1030 00:04:37.594564 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:38.085425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626163014.mount: Deactivated successfully. 
Oct 30 00:04:38.151557 kubelet[2689]: E1030 00:04:38.151412 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:38.151926 kubelet[2689]: E1030 00:04:38.151639 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:38.852674 containerd[1510]: time="2025-10-30T00:04:38.852612735Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:38.853734 containerd[1510]: time="2025-10-30T00:04:38.853543352Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 30 00:04:38.855185 containerd[1510]: time="2025-10-30T00:04:38.854514779Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:38.856763 containerd[1510]: time="2025-10-30T00:04:38.856715621Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:38.857362 containerd[1510]: time="2025-10-30T00:04:38.857331170Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.359393657s" Oct 30 00:04:38.857458 containerd[1510]: time="2025-10-30T00:04:38.857444323Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 30 00:04:38.860466 containerd[1510]: time="2025-10-30T00:04:38.860436922Z" level=info msg="CreateContainer within sandbox \"84f6051ac0d0ccf294f8c936f6af554e67c7c9ce44354b846922d8d628736623\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 00:04:38.875140 containerd[1510]: time="2025-10-30T00:04:38.873682608Z" level=info msg="Container 686564f8f12ecd2c1bf11cfc115b5db2629ac0880b3f21dbeed282c21cdd5b2b: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:04:38.885338 containerd[1510]: time="2025-10-30T00:04:38.885285794Z" level=info msg="CreateContainer within sandbox \"84f6051ac0d0ccf294f8c936f6af554e67c7c9ce44354b846922d8d628736623\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"686564f8f12ecd2c1bf11cfc115b5db2629ac0880b3f21dbeed282c21cdd5b2b\"" Oct 30 00:04:38.886464 containerd[1510]: time="2025-10-30T00:04:38.886430621Z" level=info msg="StartContainer for \"686564f8f12ecd2c1bf11cfc115b5db2629ac0880b3f21dbeed282c21cdd5b2b\"" Oct 30 00:04:38.890404 containerd[1510]: time="2025-10-30T00:04:38.890358106Z" level=info msg="connecting to shim 686564f8f12ecd2c1bf11cfc115b5db2629ac0880b3f21dbeed282c21cdd5b2b" address="unix:///run/containerd/s/0c1cff31576b0ac4783b628ee25eaff24afdbfa989754eac07ff1eb9e1c345a6" protocol=ttrpc version=3 Oct 30 00:04:38.921435 systemd[1]: Started cri-containerd-686564f8f12ecd2c1bf11cfc115b5db2629ac0880b3f21dbeed282c21cdd5b2b.scope - libcontainer container 686564f8f12ecd2c1bf11cfc115b5db2629ac0880b3f21dbeed282c21cdd5b2b. 
Oct 30 00:04:38.965393 containerd[1510]: time="2025-10-30T00:04:38.965259303Z" level=info msg="StartContainer for \"686564f8f12ecd2c1bf11cfc115b5db2629ac0880b3f21dbeed282c21cdd5b2b\" returns successfully" Oct 30 00:04:44.470300 kubelet[2689]: E1030 00:04:44.470228 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:44.504235 kubelet[2689]: I1030 00:04:44.503994 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-6sj72" podStartSLOduration=7.141058885 podStartE2EDuration="9.503965488s" podCreationTimestamp="2025-10-30 00:04:35 +0000 UTC" firstStartedPulling="2025-10-30 00:04:36.495745532 +0000 UTC m=+5.571984744" lastFinishedPulling="2025-10-30 00:04:38.858652137 +0000 UTC m=+7.934891347" observedRunningTime="2025-10-30 00:04:39.174026008 +0000 UTC m=+8.250265244" watchObservedRunningTime="2025-10-30 00:04:44.503965488 +0000 UTC m=+13.580204734" Oct 30 00:04:44.927072 update_engine[1480]: I20251030 00:04:44.926200 1480 update_attempter.cc:509] Updating boot flags... Oct 30 00:04:45.699343 sudo[1761]: pam_unix(sudo:session): session closed for user root Oct 30 00:04:45.702401 sshd[1760]: Connection closed by 139.178.89.65 port 56778 Oct 30 00:04:45.703742 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:45.710269 systemd[1]: sshd@6-147.182.197.56:22-139.178.89.65:56778.service: Deactivated successfully. Oct 30 00:04:45.715176 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 00:04:45.715640 systemd[1]: session-7.scope: Consumed 5.476s CPU time, 160.3M memory peak. Oct 30 00:04:45.720291 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. Oct 30 00:04:45.723937 systemd-logind[1479]: Removed session 7. 
Oct 30 00:04:52.386362 systemd[1]: Created slice kubepods-besteffort-podccd6b8aa_f0ed_46b1_8002_5041982bc3a5.slice - libcontainer container kubepods-besteffort-podccd6b8aa_f0ed_46b1_8002_5041982bc3a5.slice. Oct 30 00:04:52.445731 kubelet[2689]: I1030 00:04:52.445605 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccd6b8aa-f0ed-46b1-8002-5041982bc3a5-tigera-ca-bundle\") pod \"calico-typha-594489d788-v74g5\" (UID: \"ccd6b8aa-f0ed-46b1-8002-5041982bc3a5\") " pod="calico-system/calico-typha-594489d788-v74g5" Oct 30 00:04:52.445731 kubelet[2689]: I1030 00:04:52.445651 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ccd6b8aa-f0ed-46b1-8002-5041982bc3a5-typha-certs\") pod \"calico-typha-594489d788-v74g5\" (UID: \"ccd6b8aa-f0ed-46b1-8002-5041982bc3a5\") " pod="calico-system/calico-typha-594489d788-v74g5" Oct 30 00:04:52.445731 kubelet[2689]: I1030 00:04:52.445676 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6nlw\" (UniqueName: \"kubernetes.io/projected/ccd6b8aa-f0ed-46b1-8002-5041982bc3a5-kube-api-access-x6nlw\") pod \"calico-typha-594489d788-v74g5\" (UID: \"ccd6b8aa-f0ed-46b1-8002-5041982bc3a5\") " pod="calico-system/calico-typha-594489d788-v74g5" Oct 30 00:04:52.627885 systemd[1]: Created slice kubepods-besteffort-podad9c9658_b058_4502_80cd_416e296e2185.slice - libcontainer container kubepods-besteffort-podad9c9658_b058_4502_80cd_416e296e2185.slice. 
Oct 30 00:04:52.693196 kubelet[2689]: E1030 00:04:52.692947 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:52.694400 containerd[1510]: time="2025-10-30T00:04:52.694363494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-594489d788-v74g5,Uid:ccd6b8aa-f0ed-46b1-8002-5041982bc3a5,Namespace:calico-system,Attempt:0,}" Oct 30 00:04:52.724222 containerd[1510]: time="2025-10-30T00:04:52.724158744Z" level=info msg="connecting to shim fac26ad4202787a1e145b634a9f9f5b8eab70ec701c1cc99d98adfa6c8dea5d7" address="unix:///run/containerd/s/d3e7d64fd9863db891858e5265fb77b1eff9393ec8c71aa9bbef257e487be91b" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:52.748466 kubelet[2689]: I1030 00:04:52.748413 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ad9c9658-b058-4502-80cd-416e296e2185-cni-bin-dir\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.748466 kubelet[2689]: I1030 00:04:52.748461 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad9c9658-b058-4502-80cd-416e296e2185-tigera-ca-bundle\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.748466 kubelet[2689]: I1030 00:04:52.748484 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ad9c9658-b058-4502-80cd-416e296e2185-var-lib-calico\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.748813 
kubelet[2689]: I1030 00:04:52.748500 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ad9c9658-b058-4502-80cd-416e296e2185-var-run-calico\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.748813 kubelet[2689]: I1030 00:04:52.748519 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ad9c9658-b058-4502-80cd-416e296e2185-cni-net-dir\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.748813 kubelet[2689]: I1030 00:04:52.748536 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ad9c9658-b058-4502-80cd-416e296e2185-flexvol-driver-host\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.748813 kubelet[2689]: I1030 00:04:52.748554 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwvwl\" (UniqueName: \"kubernetes.io/projected/ad9c9658-b058-4502-80cd-416e296e2185-kube-api-access-pwvwl\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.748813 kubelet[2689]: I1030 00:04:52.748569 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ad9c9658-b058-4502-80cd-416e296e2185-cni-log-dir\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.750453 kubelet[2689]: I1030 00:04:52.748587 2689 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad9c9658-b058-4502-80cd-416e296e2185-lib-modules\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.750453 kubelet[2689]: I1030 00:04:52.748603 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ad9c9658-b058-4502-80cd-416e296e2185-policysync\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.750453 kubelet[2689]: I1030 00:04:52.748622 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ad9c9658-b058-4502-80cd-416e296e2185-node-certs\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.750453 kubelet[2689]: I1030 00:04:52.748641 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad9c9658-b058-4502-80cd-416e296e2185-xtables-lock\") pod \"calico-node-7f4v7\" (UID: \"ad9c9658-b058-4502-80cd-416e296e2185\") " pod="calico-system/calico-node-7f4v7" Oct 30 00:04:52.787366 systemd[1]: Started cri-containerd-fac26ad4202787a1e145b634a9f9f5b8eab70ec701c1cc99d98adfa6c8dea5d7.scope - libcontainer container fac26ad4202787a1e145b634a9f9f5b8eab70ec701c1cc99d98adfa6c8dea5d7. 
Oct 30 00:04:52.821332 kubelet[2689]: E1030 00:04:52.820792 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967" Oct 30 00:04:52.870692 kubelet[2689]: E1030 00:04:52.870614 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.870692 kubelet[2689]: W1030 00:04:52.870650 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.871497 kubelet[2689]: E1030 00:04:52.871477 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.872387 kubelet[2689]: E1030 00:04:52.872350 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.873024 kubelet[2689]: W1030 00:04:52.872632 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.873337 kubelet[2689]: E1030 00:04:52.873093 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.874219 kubelet[2689]: E1030 00:04:52.874018 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.874219 kubelet[2689]: W1030 00:04:52.874034 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.874219 kubelet[2689]: E1030 00:04:52.874056 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.874920 kubelet[2689]: E1030 00:04:52.874856 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.874920 kubelet[2689]: W1030 00:04:52.874871 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.874920 kubelet[2689]: E1030 00:04:52.874890 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.882723 kubelet[2689]: E1030 00:04:52.882625 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.883374 kubelet[2689]: W1030 00:04:52.882975 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.883374 kubelet[2689]: E1030 00:04:52.883009 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.885119 kubelet[2689]: E1030 00:04:52.885077 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.885447 kubelet[2689]: W1030 00:04:52.885209 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.885447 kubelet[2689]: E1030 00:04:52.885237 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.885657 kubelet[2689]: E1030 00:04:52.885598 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.886727 kubelet[2689]: W1030 00:04:52.886702 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.886914 kubelet[2689]: E1030 00:04:52.886898 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.889125 kubelet[2689]: E1030 00:04:52.888405 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.889125 kubelet[2689]: W1030 00:04:52.888423 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.889125 kubelet[2689]: E1030 00:04:52.888439 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.890188 kubelet[2689]: E1030 00:04:52.890173 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.890272 kubelet[2689]: W1030 00:04:52.890262 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.890330 kubelet[2689]: E1030 00:04:52.890321 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.890607 kubelet[2689]: E1030 00:04:52.890597 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.890681 kubelet[2689]: W1030 00:04:52.890666 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.890729 kubelet[2689]: E1030 00:04:52.890721 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.891188 kubelet[2689]: E1030 00:04:52.891130 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.891188 kubelet[2689]: W1030 00:04:52.891143 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.891966 kubelet[2689]: E1030 00:04:52.891824 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.891966 kubelet[2689]: W1030 00:04:52.891835 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.891966 kubelet[2689]: E1030 00:04:52.891853 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.891966 kubelet[2689]: E1030 00:04:52.891886 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.892834 kubelet[2689]: E1030 00:04:52.892590 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.892834 kubelet[2689]: W1030 00:04:52.892602 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.892834 kubelet[2689]: E1030 00:04:52.892614 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.893366 kubelet[2689]: E1030 00:04:52.893353 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.893504 kubelet[2689]: W1030 00:04:52.893490 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.893560 kubelet[2689]: E1030 00:04:52.893551 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.907169 kubelet[2689]: E1030 00:04:52.907137 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.907381 kubelet[2689]: W1030 00:04:52.907305 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.907381 kubelet[2689]: E1030 00:04:52.907335 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.907708 kubelet[2689]: E1030 00:04:52.907696 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.907802 kubelet[2689]: W1030 00:04:52.907789 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.907897 kubelet[2689]: E1030 00:04:52.907884 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.908374 kubelet[2689]: E1030 00:04:52.908336 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.908603 kubelet[2689]: W1030 00:04:52.908352 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.908603 kubelet[2689]: E1030 00:04:52.908467 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.908838 kubelet[2689]: E1030 00:04:52.908798 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.908838 kubelet[2689]: W1030 00:04:52.908810 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.909025 kubelet[2689]: E1030 00:04:52.908921 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.909378 kubelet[2689]: E1030 00:04:52.909277 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.909378 kubelet[2689]: W1030 00:04:52.909296 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.909378 kubelet[2689]: E1030 00:04:52.909306 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.909563 kubelet[2689]: E1030 00:04:52.909517 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.909563 kubelet[2689]: W1030 00:04:52.909526 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.909563 kubelet[2689]: E1030 00:04:52.909535 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.909925 kubelet[2689]: E1030 00:04:52.909825 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.909925 kubelet[2689]: W1030 00:04:52.909834 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.909925 kubelet[2689]: E1030 00:04:52.909843 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.910162 kubelet[2689]: E1030 00:04:52.910073 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.910351 kubelet[2689]: W1030 00:04:52.910227 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.910351 kubelet[2689]: E1030 00:04:52.910241 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.910507 kubelet[2689]: E1030 00:04:52.910462 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.910507 kubelet[2689]: W1030 00:04:52.910473 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.910674 kubelet[2689]: E1030 00:04:52.910483 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.910810 kubelet[2689]: E1030 00:04:52.910767 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.910810 kubelet[2689]: W1030 00:04:52.910778 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.910810 kubelet[2689]: E1030 00:04:52.910786 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.911163 kubelet[2689]: E1030 00:04:52.911149 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.911236 kubelet[2689]: W1030 00:04:52.911223 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.911382 kubelet[2689]: E1030 00:04:52.911296 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.911514 kubelet[2689]: E1030 00:04:52.911474 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.911514 kubelet[2689]: W1030 00:04:52.911484 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.911721 kubelet[2689]: E1030 00:04:52.911492 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.911842 kubelet[2689]: E1030 00:04:52.911831 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.911918 kubelet[2689]: W1030 00:04:52.911908 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.912144 kubelet[2689]: E1030 00:04:52.911972 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.912344 kubelet[2689]: E1030 00:04:52.912316 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.912408 kubelet[2689]: W1030 00:04:52.912396 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.912457 kubelet[2689]: E1030 00:04:52.912448 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.912748 kubelet[2689]: E1030 00:04:52.912662 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.912748 kubelet[2689]: W1030 00:04:52.912671 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.912748 kubelet[2689]: E1030 00:04:52.912680 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.912906 kubelet[2689]: E1030 00:04:52.912897 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.912991 kubelet[2689]: W1030 00:04:52.912981 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.913035 kubelet[2689]: E1030 00:04:52.913027 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.913380 kubelet[2689]: E1030 00:04:52.913308 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.913515 kubelet[2689]: W1030 00:04:52.913319 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.913515 kubelet[2689]: E1030 00:04:52.913445 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.913701 kubelet[2689]: E1030 00:04:52.913618 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.913701 kubelet[2689]: W1030 00:04:52.913625 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.913701 kubelet[2689]: E1030 00:04:52.913632 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.913819 kubelet[2689]: E1030 00:04:52.913809 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.913858 kubelet[2689]: W1030 00:04:52.913850 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.913975 kubelet[2689]: E1030 00:04:52.913900 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.914059 kubelet[2689]: E1030 00:04:52.914050 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.914122 kubelet[2689]: W1030 00:04:52.914107 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.914177 kubelet[2689]: E1030 00:04:52.914169 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.917766 containerd[1510]: time="2025-10-30T00:04:52.917724790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-594489d788-v74g5,Uid:ccd6b8aa-f0ed-46b1-8002-5041982bc3a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"fac26ad4202787a1e145b634a9f9f5b8eab70ec701c1cc99d98adfa6c8dea5d7\"" Oct 30 00:04:52.918898 kubelet[2689]: E1030 00:04:52.918878 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:52.919994 containerd[1510]: time="2025-10-30T00:04:52.919938791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 00:04:52.933019 kubelet[2689]: E1030 00:04:52.932959 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:52.935353 containerd[1510]: time="2025-10-30T00:04:52.935086187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7f4v7,Uid:ad9c9658-b058-4502-80cd-416e296e2185,Namespace:calico-system,Attempt:0,}" Oct 30 00:04:52.953721 kubelet[2689]: E1030 00:04:52.953269 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.953721 kubelet[2689]: W1030 00:04:52.953298 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.953721 kubelet[2689]: E1030 00:04:52.953321 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.953721 kubelet[2689]: I1030 00:04:52.953372 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06390243-fcd9-4c68-9f88-5b23f795b967-registration-dir\") pod \"csi-node-driver-7vb2j\" (UID: \"06390243-fcd9-4c68-9f88-5b23f795b967\") " pod="calico-system/csi-node-driver-7vb2j" Oct 30 00:04:52.954738 kubelet[2689]: E1030 00:04:52.954360 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.954738 kubelet[2689]: W1030 00:04:52.954378 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.954738 kubelet[2689]: E1030 00:04:52.954430 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.954738 kubelet[2689]: I1030 00:04:52.954457 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x52zb\" (UniqueName: \"kubernetes.io/projected/06390243-fcd9-4c68-9f88-5b23f795b967-kube-api-access-x52zb\") pod \"csi-node-driver-7vb2j\" (UID: \"06390243-fcd9-4c68-9f88-5b23f795b967\") " pod="calico-system/csi-node-driver-7vb2j" Oct 30 00:04:52.955713 kubelet[2689]: E1030 00:04:52.955637 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.955713 kubelet[2689]: W1030 00:04:52.955657 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.955713 kubelet[2689]: E1030 00:04:52.955683 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.956933 kubelet[2689]: I1030 00:04:52.956857 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/06390243-fcd9-4c68-9f88-5b23f795b967-varrun\") pod \"csi-node-driver-7vb2j\" (UID: \"06390243-fcd9-4c68-9f88-5b23f795b967\") " pod="calico-system/csi-node-driver-7vb2j" Oct 30 00:04:52.957276 kubelet[2689]: E1030 00:04:52.957041 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.957276 kubelet[2689]: W1030 00:04:52.957074 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.957276 kubelet[2689]: E1030 00:04:52.957135 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.959010 kubelet[2689]: E1030 00:04:52.957381 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.959010 kubelet[2689]: W1030 00:04:52.957396 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.959010 kubelet[2689]: E1030 00:04:52.957511 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.959010 kubelet[2689]: E1030 00:04:52.957693 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.959010 kubelet[2689]: W1030 00:04:52.957702 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.959010 kubelet[2689]: E1030 00:04:52.958124 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.959010 kubelet[2689]: W1030 00:04:52.958135 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.959010 kubelet[2689]: E1030 00:04:52.958148 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:52.959010 kubelet[2689]: I1030 00:04:52.958173 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06390243-fcd9-4c68-9f88-5b23f795b967-socket-dir\") pod \"csi-node-driver-7vb2j\" (UID: \"06390243-fcd9-4c68-9f88-5b23f795b967\") " pod="calico-system/csi-node-driver-7vb2j" Oct 30 00:04:52.959010 kubelet[2689]: E1030 00:04:52.958597 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:52.959499 kubelet[2689]: W1030 00:04:52.958609 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:52.959499 kubelet[2689]: E1030 00:04:52.958656 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:52.959499 kubelet[2689]: E1030 00:04:52.958621 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 30 00:04:52.959499 kubelet[2689]: I1030 00:04:52.958696 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06390243-fcd9-4c68-9f88-5b23f795b967-kubelet-dir\") pod \"csi-node-driver-7vb2j\" (UID: \"06390243-fcd9-4c68-9f88-5b23f795b967\") " pod="calico-system/csi-node-driver-7vb2j"
Oct 30 00:04:52.959499 kubelet[2689]: E1030 00:04:52.959269 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:52.960797 kubelet[2689]: W1030 00:04:52.960264 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:52.960797 kubelet[2689]: E1030 00:04:52.960295 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:52.963190 kubelet[2689]: E1030 00:04:52.963057 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:52.964310 kubelet[2689]: W1030 00:04:52.963690 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:52.964310 kubelet[2689]: E1030 00:04:52.963743 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:52.964988 kubelet[2689]: E1030 00:04:52.964827 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:52.965264 kubelet[2689]: W1030 00:04:52.965077 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:52.965518 kubelet[2689]: E1030 00:04:52.965413 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:52.966438 kubelet[2689]: E1030 00:04:52.966420 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:52.966691 kubelet[2689]: W1030 00:04:52.966514 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:52.966691 kubelet[2689]: E1030 00:04:52.966559 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:52.967487 kubelet[2689]: E1030 00:04:52.967470 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:52.968130 containerd[1510]: time="2025-10-30T00:04:52.967549371Z" level=info msg="connecting to shim 7663d2c47fb6cc44fc90f5f3c28f41d5ad293700460f66312eadf7b8353c3a2e" address="unix:///run/containerd/s/cb38b73afd07fef9ca128ab43f8b80fd7a319841c00e27afcfcddfb5e07e3cf0" namespace=k8s.io protocol=ttrpc version=3
Oct 30 00:04:52.968262 kubelet[2689]: W1030 00:04:52.967631 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:52.968262 kubelet[2689]: E1030 00:04:52.967662 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:52.969127 kubelet[2689]: E1030 00:04:52.968403 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:52.969127 kubelet[2689]: W1030 00:04:52.968615 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:52.969127 kubelet[2689]: E1030 00:04:52.968634 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:52.969323 kubelet[2689]: E1030 00:04:52.969292 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:52.969482 kubelet[2689]: W1030 00:04:52.969430 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:52.969677 kubelet[2689]: E1030 00:04:52.969596 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.001392 systemd[1]: Started cri-containerd-7663d2c47fb6cc44fc90f5f3c28f41d5ad293700460f66312eadf7b8353c3a2e.scope - libcontainer container 7663d2c47fb6cc44fc90f5f3c28f41d5ad293700460f66312eadf7b8353c3a2e.
Oct 30 00:04:53.062915 kubelet[2689]: E1030 00:04:53.062869 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.062915 kubelet[2689]: W1030 00:04:53.062907 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.063283 kubelet[2689]: E1030 00:04:53.062941 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.063371 kubelet[2689]: E1030 00:04:53.063348 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.063371 kubelet[2689]: W1030 00:04:53.063364 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.063455 kubelet[2689]: E1030 00:04:53.063392 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.063691 kubelet[2689]: E1030 00:04:53.063671 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.063726 kubelet[2689]: W1030 00:04:53.063693 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.063726 kubelet[2689]: E1030 00:04:53.063730 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.065262 kubelet[2689]: E1030 00:04:53.065237 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.065262 kubelet[2689]: W1030 00:04:53.065261 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.065363 kubelet[2689]: E1030 00:04:53.065288 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.065530 kubelet[2689]: E1030 00:04:53.065516 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.065563 kubelet[2689]: W1030 00:04:53.065533 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.065637 kubelet[2689]: E1030 00:04:53.065618 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.065800 kubelet[2689]: E1030 00:04:53.065784 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.065835 kubelet[2689]: W1030 00:04:53.065808 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.065898 kubelet[2689]: E1030 00:04:53.065879 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.066029 kubelet[2689]: E1030 00:04:53.066015 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.066061 kubelet[2689]: W1030 00:04:53.066031 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.066141 kubelet[2689]: E1030 00:04:53.066125 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.066495 kubelet[2689]: E1030 00:04:53.066471 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.066495 kubelet[2689]: W1030 00:04:53.066493 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.066664 kubelet[2689]: E1030 00:04:53.066580 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.066716 kubelet[2689]: E1030 00:04:53.066696 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.066744 kubelet[2689]: W1030 00:04:53.066716 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.066767 kubelet[2689]: E1030 00:04:53.066747 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.066985 kubelet[2689]: E1030 00:04:53.066969 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.067019 kubelet[2689]: W1030 00:04:53.066988 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.067019 kubelet[2689]: E1030 00:04:53.067009 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.067245 kubelet[2689]: E1030 00:04:53.067230 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.067288 kubelet[2689]: W1030 00:04:53.067246 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.067374 kubelet[2689]: E1030 00:04:53.067356 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.067549 kubelet[2689]: E1030 00:04:53.067534 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.067587 kubelet[2689]: W1030 00:04:53.067551 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.067709 kubelet[2689]: E1030 00:04:53.067622 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.068470 kubelet[2689]: E1030 00:04:53.068383 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.068470 kubelet[2689]: W1030 00:04:53.068467 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.068571 kubelet[2689]: E1030 00:04:53.068557 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.068771 kubelet[2689]: E1030 00:04:53.068750 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.068771 kubelet[2689]: W1030 00:04:53.068768 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.068869 kubelet[2689]: E1030 00:04:53.068847 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.069077 kubelet[2689]: E1030 00:04:53.069061 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.069077 kubelet[2689]: W1030 00:04:53.069077 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.069355 kubelet[2689]: E1030 00:04:53.069331 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.069837 kubelet[2689]: E1030 00:04:53.069816 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.069837 kubelet[2689]: W1030 00:04:53.069836 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.070174 kubelet[2689]: E1030 00:04:53.070150 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.070857 kubelet[2689]: E1030 00:04:53.070832 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.070857 kubelet[2689]: W1030 00:04:53.070852 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.071227 kubelet[2689]: E1030 00:04:53.071200 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.071379 kubelet[2689]: E1030 00:04:53.071362 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.071409 kubelet[2689]: W1030 00:04:53.071382 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.071464 kubelet[2689]: E1030 00:04:53.071446 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.073784 kubelet[2689]: E1030 00:04:53.073612 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.073784 kubelet[2689]: W1030 00:04:53.073664 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.073784 kubelet[2689]: E1030 00:04:53.073725 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.073911 kubelet[2689]: E1030 00:04:53.073891 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.073911 kubelet[2689]: W1030 00:04:53.073902 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.074305 kubelet[2689]: E1030 00:04:53.073943 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.074546 kubelet[2689]: E1030 00:04:53.074518 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.074546 kubelet[2689]: W1030 00:04:53.074539 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.074693 kubelet[2689]: E1030 00:04:53.074636 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.074838 kubelet[2689]: E1030 00:04:53.074821 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.074879 kubelet[2689]: W1030 00:04:53.074840 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.074879 kubelet[2689]: E1030 00:04:53.074857 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.075333 kubelet[2689]: E1030 00:04:53.075257 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.075333 kubelet[2689]: W1030 00:04:53.075276 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.075333 kubelet[2689]: E1030 00:04:53.075295 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.076124 kubelet[2689]: E1030 00:04:53.075955 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.076124 kubelet[2689]: W1030 00:04:53.076071 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.076198 kubelet[2689]: E1030 00:04:53.076184 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:53.076696 kubelet[2689]: E1030 00:04:53.076677 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.076696 kubelet[2689]: W1030 00:04:53.076695 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.076833 kubelet[2689]: E1030 00:04:53.076710 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:53.079013 containerd[1510]: time="2025-10-30T00:04:53.078970605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7f4v7,Uid:ad9c9658-b058-4502-80cd-416e296e2185,Namespace:calico-system,Attempt:0,} returns sandbox id \"7663d2c47fb6cc44fc90f5f3c28f41d5ad293700460f66312eadf7b8353c3a2e\""
Oct 30 00:04:53.081228 kubelet[2689]: E1030 00:04:53.081049 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 30 00:04:53.091493 kubelet[2689]: E1030 00:04:53.091456 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:53.091493 kubelet[2689]: W1030 00:04:53.091487 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:53.091634 kubelet[2689]: E1030 00:04:53.091514 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:54.094296 kubelet[2689]: E1030 00:04:54.093839 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967"
Oct 30 00:04:54.246547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048649926.mount: Deactivated successfully.
Oct 30 00:04:55.232342 containerd[1510]: time="2025-10-30T00:04:55.232228881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:55.233427 containerd[1510]: time="2025-10-30T00:04:55.232951448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Oct 30 00:04:55.234080 containerd[1510]: time="2025-10-30T00:04:55.234049677Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:55.235812 containerd[1510]: time="2025-10-30T00:04:55.235779277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:04:55.236773 containerd[1510]: time="2025-10-30T00:04:55.236739186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.316759251s"
Oct 30 00:04:55.236960 containerd[1510]: time="2025-10-30T00:04:55.236863818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Oct 30 00:04:55.238177 containerd[1510]: time="2025-10-30T00:04:55.238158905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Oct 30 00:04:55.265230 containerd[1510]: time="2025-10-30T00:04:55.265189272Z" level=info msg="CreateContainer within sandbox \"fac26ad4202787a1e145b634a9f9f5b8eab70ec701c1cc99d98adfa6c8dea5d7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 30 00:04:55.276129 containerd[1510]: time="2025-10-30T00:04:55.273510183Z" level=info msg="Container 4a84b141dee78c0b76a0e5ec25f0830236424a38a3e3b03dc8de790ca4bdae99: CDI devices from CRI Config.CDIDevices: []"
Oct 30 00:04:55.289648 containerd[1510]: time="2025-10-30T00:04:55.289606503Z" level=info msg="CreateContainer within sandbox \"fac26ad4202787a1e145b634a9f9f5b8eab70ec701c1cc99d98adfa6c8dea5d7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4a84b141dee78c0b76a0e5ec25f0830236424a38a3e3b03dc8de790ca4bdae99\""
Oct 30 00:04:55.290502 containerd[1510]: time="2025-10-30T00:04:55.290471250Z" level=info msg="StartContainer for \"4a84b141dee78c0b76a0e5ec25f0830236424a38a3e3b03dc8de790ca4bdae99\""
Oct 30 00:04:55.315872 containerd[1510]: time="2025-10-30T00:04:55.315805084Z" level=info msg="connecting to shim 4a84b141dee78c0b76a0e5ec25f0830236424a38a3e3b03dc8de790ca4bdae99" address="unix:///run/containerd/s/d3e7d64fd9863db891858e5265fb77b1eff9393ec8c71aa9bbef257e487be91b" protocol=ttrpc version=3
Oct 30 00:04:55.346407 systemd[1]: Started cri-containerd-4a84b141dee78c0b76a0e5ec25f0830236424a38a3e3b03dc8de790ca4bdae99.scope - libcontainer container 4a84b141dee78c0b76a0e5ec25f0830236424a38a3e3b03dc8de790ca4bdae99.
Oct 30 00:04:55.433234 containerd[1510]: time="2025-10-30T00:04:55.433167506Z" level=info msg="StartContainer for \"4a84b141dee78c0b76a0e5ec25f0830236424a38a3e3b03dc8de790ca4bdae99\" returns successfully"
Oct 30 00:04:56.093921 kubelet[2689]: E1030 00:04:56.093831 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967"
Oct 30 00:04:56.216979 kubelet[2689]: E1030 00:04:56.216848 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 30 00:04:56.238129 kubelet[2689]: E1030 00:04:56.237711 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.238129 kubelet[2689]: W1030 00:04:56.237768 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.238129 kubelet[2689]: E1030 00:04:56.237797 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:56.239268 kubelet[2689]: E1030 00:04:56.238846 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.239268 kubelet[2689]: W1030 00:04:56.238868 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.239268 kubelet[2689]: E1030 00:04:56.238890 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:56.239779 kubelet[2689]: I1030 00:04:56.239595 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-594489d788-v74g5" podStartSLOduration=1.921293352 podStartE2EDuration="4.239571441s" podCreationTimestamp="2025-10-30 00:04:52 +0000 UTC" firstStartedPulling="2025-10-30 00:04:52.919589486 +0000 UTC m=+21.995828696" lastFinishedPulling="2025-10-30 00:04:55.237867562 +0000 UTC m=+24.314106785" observedRunningTime="2025-10-30 00:04:56.236870466 +0000 UTC m=+25.313109698" watchObservedRunningTime="2025-10-30 00:04:56.239571441 +0000 UTC m=+25.315810670"
Oct 30 00:04:56.241012 kubelet[2689]: E1030 00:04:56.240603 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.241012 kubelet[2689]: W1030 00:04:56.240620 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.241012 kubelet[2689]: E1030 00:04:56.240641 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:56.241618 kubelet[2689]: E1030 00:04:56.241570 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.241618 kubelet[2689]: W1030 00:04:56.241614 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.241741 kubelet[2689]: E1030 00:04:56.241631 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:56.242308 kubelet[2689]: E1030 00:04:56.241881 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.242308 kubelet[2689]: W1030 00:04:56.241896 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.242308 kubelet[2689]: E1030 00:04:56.241936 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:56.242819 kubelet[2689]: E1030 00:04:56.242490 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.242819 kubelet[2689]: W1030 00:04:56.242501 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.242819 kubelet[2689]: E1030 00:04:56.242639 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:56.242953 kubelet[2689]: E1030 00:04:56.242905 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.242953 kubelet[2689]: W1030 00:04:56.242914 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.242953 kubelet[2689]: E1030 00:04:56.242923 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:56.243514 kubelet[2689]: E1030 00:04:56.243151 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.243514 kubelet[2689]: W1030 00:04:56.243165 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.243514 kubelet[2689]: E1030 00:04:56.243174 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:56.243514 kubelet[2689]: E1030 00:04:56.243372 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.243514 kubelet[2689]: W1030 00:04:56.243379 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.243514 kubelet[2689]: E1030 00:04:56.243389 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:56.243821 kubelet[2689]: E1030 00:04:56.243580 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.243821 kubelet[2689]: W1030 00:04:56.243590 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.243821 kubelet[2689]: E1030 00:04:56.243602 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:56.243949 kubelet[2689]: E1030 00:04:56.243829 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.243949 kubelet[2689]: W1030 00:04:56.243837 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.243949 kubelet[2689]: E1030 00:04:56.243846 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 30 00:04:56.244393 kubelet[2689]: E1030 00:04:56.244094 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.244393 kubelet[2689]: W1030 00:04:56.244150 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.244393 kubelet[2689]: E1030 00:04:56.244160 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:04:56.244570 kubelet[2689]: E1030 00:04:56.244462 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:04:56.244570 kubelet[2689]: W1030 00:04:56.244472 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:04:56.244570 kubelet[2689]: E1030 00:04:56.244481 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 30 00:04:56.244686 kubelet[2689]: E1030 00:04:56.244633 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.244686 kubelet[2689]: W1030 00:04:56.244640 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.244686 kubelet[2689]: E1030 00:04:56.244658 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:56.244835 kubelet[2689]: E1030 00:04:56.244809 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.244835 kubelet[2689]: W1030 00:04:56.244823 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.244835 kubelet[2689]: E1030 00:04:56.244830 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:56.291579 kubelet[2689]: E1030 00:04:56.291531 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.291579 kubelet[2689]: W1030 00:04:56.291568 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.291782 kubelet[2689]: E1030 00:04:56.291606 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:56.291883 kubelet[2689]: E1030 00:04:56.291863 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.291883 kubelet[2689]: W1030 00:04:56.291876 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.291964 kubelet[2689]: E1030 00:04:56.291946 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:56.292230 kubelet[2689]: E1030 00:04:56.292209 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.292230 kubelet[2689]: W1030 00:04:56.292227 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.292295 kubelet[2689]: E1030 00:04:56.292257 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:56.292626 kubelet[2689]: E1030 00:04:56.292608 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.292626 kubelet[2689]: W1030 00:04:56.292625 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.292735 kubelet[2689]: E1030 00:04:56.292713 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:56.292915 kubelet[2689]: E1030 00:04:56.292900 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.292948 kubelet[2689]: W1030 00:04:56.292916 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.292948 kubelet[2689]: E1030 00:04:56.292938 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:56.293179 kubelet[2689]: E1030 00:04:56.293162 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.293235 kubelet[2689]: W1030 00:04:56.293180 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.293277 kubelet[2689]: E1030 00:04:56.293268 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:56.293416 kubelet[2689]: E1030 00:04:56.293404 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.293416 kubelet[2689]: W1030 00:04:56.293415 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.293512 kubelet[2689]: E1030 00:04:56.293498 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:56.293619 kubelet[2689]: E1030 00:04:56.293607 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.293619 kubelet[2689]: W1030 00:04:56.293619 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.293685 kubelet[2689]: E1030 00:04:56.293633 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:56.293772 kubelet[2689]: E1030 00:04:56.293756 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.293772 kubelet[2689]: W1030 00:04:56.293767 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.293846 kubelet[2689]: E1030 00:04:56.293784 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:56.293971 kubelet[2689]: E1030 00:04:56.293957 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.293971 kubelet[2689]: W1030 00:04:56.293968 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.294031 kubelet[2689]: E1030 00:04:56.293985 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:56.294213 kubelet[2689]: E1030 00:04:56.294199 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.294213 kubelet[2689]: W1030 00:04:56.294211 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.294279 kubelet[2689]: E1030 00:04:56.294220 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:56.294525 kubelet[2689]: E1030 00:04:56.294512 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.294525 kubelet[2689]: W1030 00:04:56.294524 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.294646 kubelet[2689]: E1030 00:04:56.294532 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:56.294687 kubelet[2689]: E1030 00:04:56.294671 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.294687 kubelet[2689]: W1030 00:04:56.294678 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.294687 kubelet[2689]: E1030 00:04:56.294685 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:56.295384 kubelet[2689]: E1030 00:04:56.295325 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.295384 kubelet[2689]: W1030 00:04:56.295349 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.295561 kubelet[2689]: E1030 00:04:56.295412 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:56.295656 kubelet[2689]: E1030 00:04:56.295632 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.295656 kubelet[2689]: W1030 00:04:56.295648 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.295938 kubelet[2689]: E1030 00:04:56.295681 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:56.295938 kubelet[2689]: E1030 00:04:56.295779 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.295938 kubelet[2689]: W1030 00:04:56.295785 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.296015 kubelet[2689]: E1030 00:04:56.295966 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.296015 kubelet[2689]: W1030 00:04:56.295974 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.296015 kubelet[2689]: E1030 00:04:56.295983 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:04:56.296015 kubelet[2689]: E1030 00:04:56.296014 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:56.296514 kubelet[2689]: E1030 00:04:56.296447 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:04:56.296514 kubelet[2689]: W1030 00:04:56.296467 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:04:56.296514 kubelet[2689]: E1030 00:04:56.296481 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:04:56.696258 containerd[1510]: time="2025-10-30T00:04:56.696137246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:56.697318 containerd[1510]: time="2025-10-30T00:04:56.697275793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 30 00:04:56.698162 containerd[1510]: time="2025-10-30T00:04:56.698125329Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:56.701480 containerd[1510]: time="2025-10-30T00:04:56.701435351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:56.702270 containerd[1510]: 
time="2025-10-30T00:04:56.701760785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.463151823s" Oct 30 00:04:56.702270 containerd[1510]: time="2025-10-30T00:04:56.701790883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 30 00:04:56.722587 containerd[1510]: time="2025-10-30T00:04:56.706179093Z" level=info msg="CreateContainer within sandbox \"7663d2c47fb6cc44fc90f5f3c28f41d5ad293700460f66312eadf7b8353c3a2e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 30 00:04:56.732983 containerd[1510]: time="2025-10-30T00:04:56.732846036Z" level=info msg="Container 3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:04:56.799573 containerd[1510]: time="2025-10-30T00:04:56.799489469Z" level=info msg="CreateContainer within sandbox \"7663d2c47fb6cc44fc90f5f3c28f41d5ad293700460f66312eadf7b8353c3a2e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566\"" Oct 30 00:04:56.800415 containerd[1510]: time="2025-10-30T00:04:56.800359336Z" level=info msg="StartContainer for \"3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566\"" Oct 30 00:04:56.802439 containerd[1510]: time="2025-10-30T00:04:56.802385772Z" level=info msg="connecting to shim 3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566" address="unix:///run/containerd/s/cb38b73afd07fef9ca128ab43f8b80fd7a319841c00e27afcfcddfb5e07e3cf0" protocol=ttrpc 
version=3 Oct 30 00:04:56.842334 systemd[1]: Started cri-containerd-3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566.scope - libcontainer container 3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566. Oct 30 00:04:56.920886 containerd[1510]: time="2025-10-30T00:04:56.920836580Z" level=info msg="StartContainer for \"3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566\" returns successfully" Oct 30 00:04:56.934943 systemd[1]: cri-containerd-3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566.scope: Deactivated successfully. Oct 30 00:04:56.956496 containerd[1510]: time="2025-10-30T00:04:56.956144393Z" level=info msg="received exit event container_id:\"3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566\" id:\"3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566\" pid:3374 exited_at:{seconds:1761782696 nanos:937389920}" Oct 30 00:04:56.959937 containerd[1510]: time="2025-10-30T00:04:56.959889946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566\" id:\"3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566\" pid:3374 exited_at:{seconds:1761782696 nanos:937389920}" Oct 30 00:04:57.001748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3453425698d8c6083cb800e96289a665267da755e1f47b4277f8e825b6e66566-rootfs.mount: Deactivated successfully. 
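The repeated FlexVolume failures earlier in the log come from the kubelet exec'ing a driver binary that does not exist and then trying to decode its (empty) stdout as JSON; `encoding/json` reports empty input as "unexpected end of JSON input". A minimal Go sketch reproduces both halves of the failure (the `probeDriver` helper is illustrative, not kubelet's actual driver-call code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// probeDriver mimics the FlexVolume "init" call: run the driver binary
// and decode its JSON reply. When the executable is missing, the captured
// output is empty, and json.Unmarshal of empty input fails with
// "unexpected end of JSON input" -- the error seen throughout the log.
func probeDriver(path string) error {
	out, execErr := exec.Command(path, "init").CombinedOutput()
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		return fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)",
			out, err, execErr)
	}
	return nil
}

func main() {
	// Hypothetical missing driver path, mirroring the nodeagent~uds entries.
	fmt.Println(probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}
```

The kubelet retries the probe on every plugin-directory scan, which is why the same three-message cycle repeats until the driver binary appears.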
Oct 30 00:04:57.222219 kubelet[2689]: E1030 00:04:57.221989 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:57.224870 kubelet[2689]: I1030 00:04:57.222991 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 00:04:57.224870 kubelet[2689]: E1030 00:04:57.223375 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:04:57.229177 containerd[1510]: time="2025-10-30T00:04:57.227957513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 30 00:04:58.094161 kubelet[2689]: E1030 00:04:58.094085 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967" Oct 30 00:05:00.094450 kubelet[2689]: E1030 00:05:00.094382 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967" Oct 30 00:05:01.750203 containerd[1510]: time="2025-10-30T00:05:01.750091855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:01.751936 containerd[1510]: time="2025-10-30T00:05:01.751858150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 30 00:05:01.753985 containerd[1510]: 
time="2025-10-30T00:05:01.752740668Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:01.758027 containerd[1510]: time="2025-10-30T00:05:01.757975855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:01.758738 containerd[1510]: time="2025-10-30T00:05:01.758700216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.530697281s" Oct 30 00:05:01.758738 containerd[1510]: time="2025-10-30T00:05:01.758741741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 30 00:05:01.764817 containerd[1510]: time="2025-10-30T00:05:01.764750925Z" level=info msg="CreateContainer within sandbox \"7663d2c47fb6cc44fc90f5f3c28f41d5ad293700460f66312eadf7b8353c3a2e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 30 00:05:01.783819 containerd[1510]: time="2025-10-30T00:05:01.783750934Z" level=info msg="Container 238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:05:01.809994 containerd[1510]: time="2025-10-30T00:05:01.809691554Z" level=info msg="CreateContainer within sandbox \"7663d2c47fb6cc44fc90f5f3c28f41d5ad293700460f66312eadf7b8353c3a2e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada\"" Oct 30 
00:05:01.812580 containerd[1510]: time="2025-10-30T00:05:01.812343369Z" level=info msg="StartContainer for \"238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada\"" Oct 30 00:05:01.823320 containerd[1510]: time="2025-10-30T00:05:01.823014782Z" level=info msg="connecting to shim 238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada" address="unix:///run/containerd/s/cb38b73afd07fef9ca128ab43f8b80fd7a319841c00e27afcfcddfb5e07e3cf0" protocol=ttrpc version=3 Oct 30 00:05:01.875472 systemd[1]: Started cri-containerd-238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada.scope - libcontainer container 238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada. Oct 30 00:05:01.961928 containerd[1510]: time="2025-10-30T00:05:01.961852834Z" level=info msg="StartContainer for \"238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada\" returns successfully" Oct 30 00:05:02.094472 kubelet[2689]: E1030 00:05:02.094382 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967" Oct 30 00:05:02.253574 kubelet[2689]: E1030 00:05:02.252313 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:02.821455 systemd[1]: cri-containerd-238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada.scope: Deactivated successfully. Oct 30 00:05:02.822442 systemd[1]: cri-containerd-238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada.scope: Consumed 735ms CPU time, 163.2M memory peak, 12.2M read from disk, 171.3M written to disk. 
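The kubelet's "Nameserver limits exceeded" warnings above reflect the glibc resolver's MAXNS limit: only the first three `nameserver` entries in resolv.conf are used, so extra servers are dropped from the applied line. A small sketch of that capping (an illustration of the limit, not kubelet's actual dns.go code):

```go
package main

import "fmt"

// capNameservers mirrors glibc's MAXNS: resolvers beyond the first three
// in /etc/resolv.conf are ignored, which is what the kubelet warns about
// when it builds a pod's resolv.conf.
func capNameservers(ns []string) []string {
	const maxNS = 3
	if len(ns) > maxNS {
		return ns[:maxNS]
	}
	return ns
}

func main() {
	// One more server than glibc will use:
	all := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.0.0.2"}
	fmt.Println(capNameservers(all)) // → [67.207.67.3 67.207.67.2 67.207.67.3]
}
```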
Oct 30 00:05:02.832224 containerd[1510]: time="2025-10-30T00:05:02.832156101Z" level=info msg="received exit event container_id:\"238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada\" id:\"238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada\" pid:3433 exited_at:{seconds:1761782702 nanos:829865384}" Oct 30 00:05:02.835524 containerd[1510]: time="2025-10-30T00:05:02.835447138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada\" id:\"238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada\" pid:3433 exited_at:{seconds:1761782702 nanos:829865384}" Oct 30 00:05:02.876647 kubelet[2689]: I1030 00:05:02.874785 2689 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 30 00:05:02.892854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-238b3a4643424808c200876f48a73d2d0358136a5bb6ac1aa48377421f46fada-rootfs.mount: Deactivated successfully. Oct 30 00:05:02.960031 systemd[1]: Created slice kubepods-besteffort-pod041ed311_1a2e_462d_ace8_65f00add4557.slice - libcontainer container kubepods-besteffort-pod041ed311_1a2e_462d_ace8_65f00add4557.slice. Oct 30 00:05:03.016250 systemd[1]: Created slice kubepods-besteffort-pod7ce87a9a_4a9f_4e2a_b7f9_1e809a938d71.slice - libcontainer container kubepods-besteffort-pod7ce87a9a_4a9f_4e2a_b7f9_1e809a938d71.slice. Oct 30 00:05:03.035759 systemd[1]: Created slice kubepods-besteffort-podfbe97583_c6f6_4157_9d17_86a8d42b9d6d.slice - libcontainer container kubepods-besteffort-podfbe97583_c6f6_4157_9d17_86a8d42b9d6d.slice. 
Oct 30 00:05:03.049524 kubelet[2689]: I1030 00:05:03.049469 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c-tigera-ca-bundle\") pod \"calico-kube-controllers-7c6b9bd746-st5j9\" (UID: \"d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c\") " pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" Oct 30 00:05:03.049744 kubelet[2689]: I1030 00:05:03.049536 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-whisker-ca-bundle\") pod \"whisker-659d8dc9f6-9x94j\" (UID: \"fbe97583-c6f6-4157-9d17-86a8d42b9d6d\") " pod="calico-system/whisker-659d8dc9f6-9x94j" Oct 30 00:05:03.049744 kubelet[2689]: I1030 00:05:03.049570 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71-config\") pod \"goldmane-666569f655-wtvtg\" (UID: \"7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71\") " pod="calico-system/goldmane-666569f655-wtvtg" Oct 30 00:05:03.049744 kubelet[2689]: I1030 00:05:03.049604 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk467\" (UniqueName: \"kubernetes.io/projected/7402bacb-6343-422b-b5ae-563901a1a2d5-kube-api-access-gk467\") pod \"coredns-668d6bf9bc-glxrs\" (UID: \"7402bacb-6343-422b-b5ae-563901a1a2d5\") " pod="kube-system/coredns-668d6bf9bc-glxrs" Oct 30 00:05:03.049744 kubelet[2689]: I1030 00:05:03.049634 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48np5\" (UniqueName: \"kubernetes.io/projected/7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62-kube-api-access-48np5\") pod \"calico-apiserver-7668ff9dd9-98c6b\" (UID: 
\"7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62\") " pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" Oct 30 00:05:03.049744 kubelet[2689]: I1030 00:05:03.049662 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-577v5\" (UniqueName: \"kubernetes.io/projected/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-kube-api-access-577v5\") pod \"whisker-659d8dc9f6-9x94j\" (UID: \"fbe97583-c6f6-4157-9d17-86a8d42b9d6d\") " pod="calico-system/whisker-659d8dc9f6-9x94j" Oct 30 00:05:03.049966 kubelet[2689]: I1030 00:05:03.049703 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7402bacb-6343-422b-b5ae-563901a1a2d5-config-volume\") pod \"coredns-668d6bf9bc-glxrs\" (UID: \"7402bacb-6343-422b-b5ae-563901a1a2d5\") " pod="kube-system/coredns-668d6bf9bc-glxrs" Oct 30 00:05:03.049966 kubelet[2689]: I1030 00:05:03.049734 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvlp\" (UniqueName: \"kubernetes.io/projected/041ed311-1a2e-462d-ace8-65f00add4557-kube-api-access-tqvlp\") pod \"calico-apiserver-7668ff9dd9-jn9tg\" (UID: \"041ed311-1a2e-462d-ace8-65f00add4557\") " pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" Oct 30 00:05:03.049966 kubelet[2689]: I1030 00:05:03.049764 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-whisker-backend-key-pair\") pod \"whisker-659d8dc9f6-9x94j\" (UID: \"fbe97583-c6f6-4157-9d17-86a8d42b9d6d\") " pod="calico-system/whisker-659d8dc9f6-9x94j" Oct 30 00:05:03.049966 kubelet[2689]: I1030 00:05:03.049813 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62-calico-apiserver-certs\") pod \"calico-apiserver-7668ff9dd9-98c6b\" (UID: \"7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62\") " pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" Oct 30 00:05:03.049966 kubelet[2689]: I1030 00:05:03.049848 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxlc8\" (UniqueName: \"kubernetes.io/projected/7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71-kube-api-access-cxlc8\") pod \"goldmane-666569f655-wtvtg\" (UID: \"7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71\") " pod="calico-system/goldmane-666569f655-wtvtg" Oct 30 00:05:03.054385 kubelet[2689]: I1030 00:05:03.049883 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/041ed311-1a2e-462d-ace8-65f00add4557-calico-apiserver-certs\") pod \"calico-apiserver-7668ff9dd9-jn9tg\" (UID: \"041ed311-1a2e-462d-ace8-65f00add4557\") " pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" Oct 30 00:05:03.054385 kubelet[2689]: I1030 00:05:03.049914 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0e62754-eb00-4b08-8cc7-2b7fa22525b9-config-volume\") pod \"coredns-668d6bf9bc-7lqr5\" (UID: \"d0e62754-eb00-4b08-8cc7-2b7fa22525b9\") " pod="kube-system/coredns-668d6bf9bc-7lqr5" Oct 30 00:05:03.054385 kubelet[2689]: I1030 00:05:03.049946 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5hqm\" (UniqueName: \"kubernetes.io/projected/d0e62754-eb00-4b08-8cc7-2b7fa22525b9-kube-api-access-x5hqm\") pod \"coredns-668d6bf9bc-7lqr5\" (UID: \"d0e62754-eb00-4b08-8cc7-2b7fa22525b9\") " pod="kube-system/coredns-668d6bf9bc-7lqr5" Oct 30 00:05:03.054385 kubelet[2689]: I1030 00:05:03.049976 2689 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71-goldmane-ca-bundle\") pod \"goldmane-666569f655-wtvtg\" (UID: \"7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71\") " pod="calico-system/goldmane-666569f655-wtvtg" Oct 30 00:05:03.054385 kubelet[2689]: I1030 00:05:03.050011 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwjpd\" (UniqueName: \"kubernetes.io/projected/d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c-kube-api-access-pwjpd\") pod \"calico-kube-controllers-7c6b9bd746-st5j9\" (UID: \"d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c\") " pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" Oct 30 00:05:03.051719 systemd[1]: Created slice kubepods-burstable-podd0e62754_eb00_4b08_8cc7_2b7fa22525b9.slice - libcontainer container kubepods-burstable-podd0e62754_eb00_4b08_8cc7_2b7fa22525b9.slice. Oct 30 00:05:03.054979 kubelet[2689]: I1030 00:05:03.050286 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71-goldmane-key-pair\") pod \"goldmane-666569f655-wtvtg\" (UID: \"7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71\") " pod="calico-system/goldmane-666569f655-wtvtg" Oct 30 00:05:03.069930 systemd[1]: Created slice kubepods-besteffort-pod7fdc9fa2_26e8_4238_9f3b_2e6c25ad7e62.slice - libcontainer container kubepods-besteffort-pod7fdc9fa2_26e8_4238_9f3b_2e6c25ad7e62.slice. Oct 30 00:05:03.083494 systemd[1]: Created slice kubepods-besteffort-podd930ac2e_f4f2_4b3f_a87d_015fa72b1a3c.slice - libcontainer container kubepods-besteffort-podd930ac2e_f4f2_4b3f_a87d_015fa72b1a3c.slice. Oct 30 00:05:03.094959 systemd[1]: Created slice kubepods-burstable-pod7402bacb_6343_422b_b5ae_563901a1a2d5.slice - libcontainer container kubepods-burstable-pod7402bacb_6343_422b_b5ae_563901a1a2d5.slice. 
Oct 30 00:05:03.295934 kubelet[2689]: E1030 00:05:03.295884 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:03.302539 containerd[1510]: time="2025-10-30T00:05:03.302417649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 00:05:03.328137 containerd[1510]: time="2025-10-30T00:05:03.326854335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wtvtg,Uid:7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71,Namespace:calico-system,Attempt:0,}" Oct 30 00:05:03.346613 containerd[1510]: time="2025-10-30T00:05:03.345449117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659d8dc9f6-9x94j,Uid:fbe97583-c6f6-4157-9d17-86a8d42b9d6d,Namespace:calico-system,Attempt:0,}" Oct 30 00:05:03.365738 kubelet[2689]: E1030 00:05:03.365668 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:03.397137 containerd[1510]: time="2025-10-30T00:05:03.396801375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6b9bd746-st5j9,Uid:d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c,Namespace:calico-system,Attempt:0,}" Oct 30 00:05:03.399644 containerd[1510]: time="2025-10-30T00:05:03.399593801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lqr5,Uid:d0e62754-eb00-4b08-8cc7-2b7fa22525b9,Namespace:kube-system,Attempt:0,}" Oct 30 00:05:03.400409 containerd[1510]: time="2025-10-30T00:05:03.400368801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7668ff9dd9-98c6b,Uid:7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:05:03.405679 kubelet[2689]: E1030 00:05:03.405620 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:03.426384 containerd[1510]: time="2025-10-30T00:05:03.426179935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glxrs,Uid:7402bacb-6343-422b-b5ae-563901a1a2d5,Namespace:kube-system,Attempt:0,}" Oct 30 00:05:03.586759 containerd[1510]: time="2025-10-30T00:05:03.586693286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7668ff9dd9-jn9tg,Uid:041ed311-1a2e-462d-ace8-65f00add4557,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:05:03.683179 containerd[1510]: time="2025-10-30T00:05:03.682565383Z" level=error msg="Failed to destroy network for sandbox \"4758f5e4f561df5e6ef220267a930c2b1c87a55f5971615480da74e98b04bde2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.698886 containerd[1510]: time="2025-10-30T00:05:03.698213652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wtvtg,Uid:7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4758f5e4f561df5e6ef220267a930c2b1c87a55f5971615480da74e98b04bde2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.745367 kubelet[2689]: E1030 00:05:03.744983 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4758f5e4f561df5e6ef220267a930c2b1c87a55f5971615480da74e98b04bde2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Oct 30 00:05:03.745367 kubelet[2689]: E1030 00:05:03.745276 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4758f5e4f561df5e6ef220267a930c2b1c87a55f5971615480da74e98b04bde2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wtvtg" Oct 30 00:05:03.745367 kubelet[2689]: E1030 00:05:03.745305 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4758f5e4f561df5e6ef220267a930c2b1c87a55f5971615480da74e98b04bde2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wtvtg" Oct 30 00:05:03.745832 kubelet[2689]: E1030 00:05:03.745644 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wtvtg_calico-system(7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wtvtg_calico-system(7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4758f5e4f561df5e6ef220267a930c2b1c87a55f5971615480da74e98b04bde2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wtvtg" podUID="7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71" Oct 30 00:05:03.747790 containerd[1510]: time="2025-10-30T00:05:03.744892073Z" level=error msg="Failed to destroy network for sandbox 
\"0af507e3e7e54e125270a9c6f376f084d32db6bdd12a42578f6f11edbe224022\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.748941 containerd[1510]: time="2025-10-30T00:05:03.748900058Z" level=error msg="Failed to destroy network for sandbox \"4985f468fa09c3cca6edfedd7344aea4ae61142eedb5cf73ce1b188eb8ddedd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.753498 containerd[1510]: time="2025-10-30T00:05:03.753383804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glxrs,Uid:7402bacb-6343-422b-b5ae-563901a1a2d5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0af507e3e7e54e125270a9c6f376f084d32db6bdd12a42578f6f11edbe224022\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.754272 kubelet[2689]: E1030 00:05:03.754192 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0af507e3e7e54e125270a9c6f376f084d32db6bdd12a42578f6f11edbe224022\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.755501 kubelet[2689]: E1030 00:05:03.754284 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0af507e3e7e54e125270a9c6f376f084d32db6bdd12a42578f6f11edbe224022\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-glxrs" Oct 30 00:05:03.755501 kubelet[2689]: E1030 00:05:03.754321 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0af507e3e7e54e125270a9c6f376f084d32db6bdd12a42578f6f11edbe224022\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-glxrs" Oct 30 00:05:03.755501 kubelet[2689]: E1030 00:05:03.754390 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-glxrs_kube-system(7402bacb-6343-422b-b5ae-563901a1a2d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-glxrs_kube-system(7402bacb-6343-422b-b5ae-563901a1a2d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0af507e3e7e54e125270a9c6f376f084d32db6bdd12a42578f6f11edbe224022\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-glxrs" podUID="7402bacb-6343-422b-b5ae-563901a1a2d5" Oct 30 00:05:03.755715 containerd[1510]: time="2025-10-30T00:05:03.754727209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6b9bd746-st5j9,Uid:d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4985f468fa09c3cca6edfedd7344aea4ae61142eedb5cf73ce1b188eb8ddedd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.756747 kubelet[2689]: E1030 00:05:03.756206 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4985f468fa09c3cca6edfedd7344aea4ae61142eedb5cf73ce1b188eb8ddedd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.756747 kubelet[2689]: E1030 00:05:03.756296 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4985f468fa09c3cca6edfedd7344aea4ae61142eedb5cf73ce1b188eb8ddedd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" Oct 30 00:05:03.756747 kubelet[2689]: E1030 00:05:03.756337 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4985f468fa09c3cca6edfedd7344aea4ae61142eedb5cf73ce1b188eb8ddedd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" Oct 30 00:05:03.757755 kubelet[2689]: E1030 00:05:03.756454 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c6b9bd746-st5j9_calico-system(d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c6b9bd746-st5j9_calico-system(d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"4985f468fa09c3cca6edfedd7344aea4ae61142eedb5cf73ce1b188eb8ddedd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" podUID="d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c" Oct 30 00:05:03.768081 containerd[1510]: time="2025-10-30T00:05:03.766897215Z" level=error msg="Failed to destroy network for sandbox \"7d0b62ed5c3359e2c8625a6e9e502fc43c99f71049fa180b8c30d656125802e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.770682 containerd[1510]: time="2025-10-30T00:05:03.770352567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7668ff9dd9-98c6b,Uid:7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d0b62ed5c3359e2c8625a6e9e502fc43c99f71049fa180b8c30d656125802e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.771511 kubelet[2689]: E1030 00:05:03.771229 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d0b62ed5c3359e2c8625a6e9e502fc43c99f71049fa180b8c30d656125802e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.771511 kubelet[2689]: E1030 00:05:03.771295 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"7d0b62ed5c3359e2c8625a6e9e502fc43c99f71049fa180b8c30d656125802e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" Oct 30 00:05:03.771511 kubelet[2689]: E1030 00:05:03.771322 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d0b62ed5c3359e2c8625a6e9e502fc43c99f71049fa180b8c30d656125802e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" Oct 30 00:05:03.771729 kubelet[2689]: E1030 00:05:03.771366 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7668ff9dd9-98c6b_calico-apiserver(7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7668ff9dd9-98c6b_calico-apiserver(7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d0b62ed5c3359e2c8625a6e9e502fc43c99f71049fa180b8c30d656125802e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" podUID="7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62" Oct 30 00:05:03.793314 containerd[1510]: time="2025-10-30T00:05:03.793200713Z" level=error msg="Failed to destroy network for sandbox \"1777165152a1022cdfed12dd2a4f8d1c672dd0573bc95f38396628acfb0584e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.798521 containerd[1510]: time="2025-10-30T00:05:03.798345520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659d8dc9f6-9x94j,Uid:fbe97583-c6f6-4157-9d17-86a8d42b9d6d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1777165152a1022cdfed12dd2a4f8d1c672dd0573bc95f38396628acfb0584e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.799126 kubelet[2689]: E1030 00:05:03.799029 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1777165152a1022cdfed12dd2a4f8d1c672dd0573bc95f38396628acfb0584e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.800252 kubelet[2689]: E1030 00:05:03.800159 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1777165152a1022cdfed12dd2a4f8d1c672dd0573bc95f38396628acfb0584e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-659d8dc9f6-9x94j" Oct 30 00:05:03.800252 kubelet[2689]: E1030 00:05:03.800213 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1777165152a1022cdfed12dd2a4f8d1c672dd0573bc95f38396628acfb0584e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-659d8dc9f6-9x94j" Oct 30 00:05:03.800391 kubelet[2689]: E1030 00:05:03.800287 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-659d8dc9f6-9x94j_calico-system(fbe97583-c6f6-4157-9d17-86a8d42b9d6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-659d8dc9f6-9x94j_calico-system(fbe97583-c6f6-4157-9d17-86a8d42b9d6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1777165152a1022cdfed12dd2a4f8d1c672dd0573bc95f38396628acfb0584e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-659d8dc9f6-9x94j" podUID="fbe97583-c6f6-4157-9d17-86a8d42b9d6d" Oct 30 00:05:03.806843 containerd[1510]: time="2025-10-30T00:05:03.806778884Z" level=error msg="Failed to destroy network for sandbox \"f18a1471b14fdad24bf786ba68bd69f55e1dc09c8aad47c7656193012916f3a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.810519 containerd[1510]: time="2025-10-30T00:05:03.810459245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lqr5,Uid:d0e62754-eb00-4b08-8cc7-2b7fa22525b9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f18a1471b14fdad24bf786ba68bd69f55e1dc09c8aad47c7656193012916f3a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.811017 kubelet[2689]: E1030 00:05:03.810881 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"f18a1471b14fdad24bf786ba68bd69f55e1dc09c8aad47c7656193012916f3a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.811212 kubelet[2689]: E1030 00:05:03.810984 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f18a1471b14fdad24bf786ba68bd69f55e1dc09c8aad47c7656193012916f3a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lqr5" Oct 30 00:05:03.811387 kubelet[2689]: E1030 00:05:03.811278 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f18a1471b14fdad24bf786ba68bd69f55e1dc09c8aad47c7656193012916f3a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lqr5" Oct 30 00:05:03.811546 kubelet[2689]: E1030 00:05:03.811490 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7lqr5_kube-system(d0e62754-eb00-4b08-8cc7-2b7fa22525b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7lqr5_kube-system(d0e62754-eb00-4b08-8cc7-2b7fa22525b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f18a1471b14fdad24bf786ba68bd69f55e1dc09c8aad47c7656193012916f3a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-7lqr5" podUID="d0e62754-eb00-4b08-8cc7-2b7fa22525b9" Oct 30 00:05:03.832421 containerd[1510]: time="2025-10-30T00:05:03.832176493Z" level=error msg="Failed to destroy network for sandbox \"fdf49be43f2e94d670c9507ea750d8c506a1a12b9c619f3243af1f77a9633876\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.834523 containerd[1510]: time="2025-10-30T00:05:03.834369381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7668ff9dd9-jn9tg,Uid:041ed311-1a2e-462d-ace8-65f00add4557,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdf49be43f2e94d670c9507ea750d8c506a1a12b9c619f3243af1f77a9633876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.835537 kubelet[2689]: E1030 00:05:03.835324 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdf49be43f2e94d670c9507ea750d8c506a1a12b9c619f3243af1f77a9633876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:03.835537 kubelet[2689]: E1030 00:05:03.835412 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdf49be43f2e94d670c9507ea750d8c506a1a12b9c619f3243af1f77a9633876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" Oct 30 00:05:03.835722 kubelet[2689]: E1030 00:05:03.835456 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdf49be43f2e94d670c9507ea750d8c506a1a12b9c619f3243af1f77a9633876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" Oct 30 00:05:03.835766 kubelet[2689]: E1030 00:05:03.835714 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7668ff9dd9-jn9tg_calico-apiserver(041ed311-1a2e-462d-ace8-65f00add4557)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7668ff9dd9-jn9tg_calico-apiserver(041ed311-1a2e-462d-ace8-65f00add4557)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdf49be43f2e94d670c9507ea750d8c506a1a12b9c619f3243af1f77a9633876\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" podUID="041ed311-1a2e-462d-ace8-65f00add4557" Oct 30 00:05:04.101706 systemd[1]: Created slice kubepods-besteffort-pod06390243_fcd9_4c68_9f88_5b23f795b967.slice - libcontainer container kubepods-besteffort-pod06390243_fcd9_4c68_9f88_5b23f795b967.slice. 
Oct 30 00:05:04.106003 containerd[1510]: time="2025-10-30T00:05:04.105875047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7vb2j,Uid:06390243-fcd9-4c68-9f88-5b23f795b967,Namespace:calico-system,Attempt:0,}" Oct 30 00:05:04.185832 containerd[1510]: time="2025-10-30T00:05:04.185739893Z" level=error msg="Failed to destroy network for sandbox \"f077f0ea4bc624ff708412087141762df8733e94ef8d62ff46911d1f651cf12f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:04.190453 systemd[1]: run-netns-cni\x2d06d2308a\x2d79f9\x2dd624\x2d3079\x2d2fc6ede35716.mount: Deactivated successfully. Oct 30 00:05:04.191296 containerd[1510]: time="2025-10-30T00:05:04.191127357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7vb2j,Uid:06390243-fcd9-4c68-9f88-5b23f795b967,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f077f0ea4bc624ff708412087141762df8733e94ef8d62ff46911d1f651cf12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:04.191914 kubelet[2689]: E1030 00:05:04.191761 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f077f0ea4bc624ff708412087141762df8733e94ef8d62ff46911d1f651cf12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:05:04.192208 kubelet[2689]: E1030 00:05:04.192165 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f077f0ea4bc624ff708412087141762df8733e94ef8d62ff46911d1f651cf12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7vb2j" Oct 30 00:05:04.192364 kubelet[2689]: E1030 00:05:04.192311 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f077f0ea4bc624ff708412087141762df8733e94ef8d62ff46911d1f651cf12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7vb2j" Oct 30 00:05:04.192606 kubelet[2689]: E1030 00:05:04.192495 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7vb2j_calico-system(06390243-fcd9-4c68-9f88-5b23f795b967)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7vb2j_calico-system(06390243-fcd9-4c68-9f88-5b23f795b967)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f077f0ea4bc624ff708412087141762df8733e94ef8d62ff46911d1f651cf12f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967" Oct 30 00:05:08.692458 kubelet[2689]: I1030 00:05:08.691740 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 00:05:08.694202 kubelet[2689]: E1030 00:05:08.693682 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:09.312329 kubelet[2689]: 
E1030 00:05:09.312289 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:12.513695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951171075.mount: Deactivated successfully. Oct 30 00:05:12.538242 containerd[1510]: time="2025-10-30T00:05:12.538034877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:12.541215 containerd[1510]: time="2025-10-30T00:05:12.541167727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 30 00:05:12.541487 containerd[1510]: time="2025-10-30T00:05:12.541441735Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:12.543996 containerd[1510]: time="2025-10-30T00:05:12.543948150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:12.544696 containerd[1510]: time="2025-10-30T00:05:12.544662542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.24217921s" Oct 30 00:05:12.544812 containerd[1510]: time="2025-10-30T00:05:12.544799291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 30 00:05:12.564821 
containerd[1510]: time="2025-10-30T00:05:12.564767519Z" level=info msg="CreateContainer within sandbox \"7663d2c47fb6cc44fc90f5f3c28f41d5ad293700460f66312eadf7b8353c3a2e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 00:05:12.590694 containerd[1510]: time="2025-10-30T00:05:12.590166200Z" level=info msg="Container 17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:05:12.600899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488399911.mount: Deactivated successfully. Oct 30 00:05:12.607855 containerd[1510]: time="2025-10-30T00:05:12.607802301Z" level=info msg="CreateContainer within sandbox \"7663d2c47fb6cc44fc90f5f3c28f41d5ad293700460f66312eadf7b8353c3a2e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed\"" Oct 30 00:05:12.608885 containerd[1510]: time="2025-10-30T00:05:12.608853760Z" level=info msg="StartContainer for \"17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed\"" Oct 30 00:05:12.610612 containerd[1510]: time="2025-10-30T00:05:12.610578408Z" level=info msg="connecting to shim 17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed" address="unix:///run/containerd/s/cb38b73afd07fef9ca128ab43f8b80fd7a319841c00e27afcfcddfb5e07e3cf0" protocol=ttrpc version=3 Oct 30 00:05:12.673403 systemd[1]: Started cri-containerd-17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed.scope - libcontainer container 17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed. Oct 30 00:05:12.757323 containerd[1510]: time="2025-10-30T00:05:12.757053325Z" level=info msg="StartContainer for \"17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed\" returns successfully" Oct 30 00:05:13.010091 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Oct 30 00:05:13.010272 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 30 00:05:13.329544 kubelet[2689]: I1030 00:05:13.329300 2689 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-whisker-backend-key-pair\") pod \"fbe97583-c6f6-4157-9d17-86a8d42b9d6d\" (UID: \"fbe97583-c6f6-4157-9d17-86a8d42b9d6d\") " Oct 30 00:05:13.329544 kubelet[2689]: I1030 00:05:13.329344 2689 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-577v5\" (UniqueName: \"kubernetes.io/projected/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-kube-api-access-577v5\") pod \"fbe97583-c6f6-4157-9d17-86a8d42b9d6d\" (UID: \"fbe97583-c6f6-4157-9d17-86a8d42b9d6d\") " Oct 30 00:05:13.329544 kubelet[2689]: I1030 00:05:13.329391 2689 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-whisker-ca-bundle\") pod \"fbe97583-c6f6-4157-9d17-86a8d42b9d6d\" (UID: \"fbe97583-c6f6-4157-9d17-86a8d42b9d6d\") " Oct 30 00:05:13.332251 kubelet[2689]: I1030 00:05:13.329950 2689 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fbe97583-c6f6-4157-9d17-86a8d42b9d6d" (UID: "fbe97583-c6f6-4157-9d17-86a8d42b9d6d"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 00:05:13.335010 kubelet[2689]: E1030 00:05:13.334975 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:13.342073 kubelet[2689]: I1030 00:05:13.342020 2689 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fbe97583-c6f6-4157-9d17-86a8d42b9d6d" (UID: "fbe97583-c6f6-4157-9d17-86a8d42b9d6d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 00:05:13.349495 kubelet[2689]: I1030 00:05:13.349417 2689 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-kube-api-access-577v5" (OuterVolumeSpecName: "kube-api-access-577v5") pod "fbe97583-c6f6-4157-9d17-86a8d42b9d6d" (UID: "fbe97583-c6f6-4157-9d17-86a8d42b9d6d"). InnerVolumeSpecName "kube-api-access-577v5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 00:05:13.430655 kubelet[2689]: I1030 00:05:13.429934 2689 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-whisker-ca-bundle\") on node \"ci-4459.1.0-n-959986c1c8\" DevicePath \"\"" Oct 30 00:05:13.430655 kubelet[2689]: I1030 00:05:13.429976 2689 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-577v5\" (UniqueName: \"kubernetes.io/projected/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-kube-api-access-577v5\") on node \"ci-4459.1.0-n-959986c1c8\" DevicePath \"\"" Oct 30 00:05:13.430655 kubelet[2689]: I1030 00:05:13.429986 2689 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fbe97583-c6f6-4157-9d17-86a8d42b9d6d-whisker-backend-key-pair\") on node \"ci-4459.1.0-n-959986c1c8\" DevicePath \"\"" Oct 30 00:05:13.512930 systemd[1]: var-lib-kubelet-pods-fbe97583\x2dc6f6\x2d4157\x2d9d17\x2d86a8d42b9d6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d577v5.mount: Deactivated successfully. Oct 30 00:05:13.513089 systemd[1]: var-lib-kubelet-pods-fbe97583\x2dc6f6\x2d4157\x2d9d17\x2d86a8d42b9d6d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 30 00:05:13.589202 containerd[1510]: time="2025-10-30T00:05:13.588521842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed\" id:\"90b2f3a2d418b9eb352c3809e03d0315f243f45c1fae1ae9720b96c151f309a0\" pid:3758 exit_status:1 exited_at:{seconds:1761782713 nanos:587719234}" Oct 30 00:05:13.645603 systemd[1]: Removed slice kubepods-besteffort-podfbe97583_c6f6_4157_9d17_86a8d42b9d6d.slice - libcontainer container kubepods-besteffort-podfbe97583_c6f6_4157_9d17_86a8d42b9d6d.slice. 
Oct 30 00:05:13.665137 kubelet[2689]: I1030 00:05:13.663750 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7f4v7" podStartSLOduration=2.199593412 podStartE2EDuration="21.663717502s" podCreationTimestamp="2025-10-30 00:04:52 +0000 UTC" firstStartedPulling="2025-10-30 00:04:53.081750411 +0000 UTC m=+22.157989623" lastFinishedPulling="2025-10-30 00:05:12.545874488 +0000 UTC m=+41.622113713" observedRunningTime="2025-10-30 00:05:13.387334463 +0000 UTC m=+42.463573698" watchObservedRunningTime="2025-10-30 00:05:13.663717502 +0000 UTC m=+42.739956732" Oct 30 00:05:13.733732 systemd[1]: Created slice kubepods-besteffort-pod05f3fe96_a4e2_497a_aa78_f94004b3a92a.slice - libcontainer container kubepods-besteffort-pod05f3fe96_a4e2_497a_aa78_f94004b3a92a.slice. Oct 30 00:05:13.834125 kubelet[2689]: I1030 00:05:13.834022 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05f3fe96-a4e2-497a-aa78-f94004b3a92a-whisker-ca-bundle\") pod \"whisker-548d886cd6-g6b4q\" (UID: \"05f3fe96-a4e2-497a-aa78-f94004b3a92a\") " pod="calico-system/whisker-548d886cd6-g6b4q" Oct 30 00:05:13.834298 kubelet[2689]: I1030 00:05:13.834155 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grsrd\" (UniqueName: \"kubernetes.io/projected/05f3fe96-a4e2-497a-aa78-f94004b3a92a-kube-api-access-grsrd\") pod \"whisker-548d886cd6-g6b4q\" (UID: \"05f3fe96-a4e2-497a-aa78-f94004b3a92a\") " pod="calico-system/whisker-548d886cd6-g6b4q" Oct 30 00:05:13.834298 kubelet[2689]: I1030 00:05:13.834246 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05f3fe96-a4e2-497a-aa78-f94004b3a92a-whisker-backend-key-pair\") pod \"whisker-548d886cd6-g6b4q\" (UID: 
\"05f3fe96-a4e2-497a-aa78-f94004b3a92a\") " pod="calico-system/whisker-548d886cd6-g6b4q" Oct 30 00:05:14.039910 containerd[1510]: time="2025-10-30T00:05:14.039762974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548d886cd6-g6b4q,Uid:05f3fe96-a4e2-497a-aa78-f94004b3a92a,Namespace:calico-system,Attempt:0,}" Oct 30 00:05:14.096153 containerd[1510]: time="2025-10-30T00:05:14.095125144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7668ff9dd9-98c6b,Uid:7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:05:14.338627 kubelet[2689]: E1030 00:05:14.338578 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:14.424803 systemd-networkd[1424]: cali029eae7c53d: Link UP Oct 30 00:05:14.426276 systemd-networkd[1424]: cali029eae7c53d: Gained carrier Oct 30 00:05:14.495493 containerd[1510]: 2025-10-30 00:05:14.115 [INFO][3783] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:05:14.495493 containerd[1510]: 2025-10-30 00:05:14.145 [INFO][3783] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0 whisker-548d886cd6- calico-system 05f3fe96-a4e2-497a-aa78-f94004b3a92a 959 0 2025-10-30 00:05:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:548d886cd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.1.0-n-959986c1c8 whisker-548d886cd6-g6b4q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali029eae7c53d [] [] }} ContainerID="51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" Namespace="calico-system" Pod="whisker-548d886cd6-g6b4q" 
WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-" Oct 30 00:05:14.495493 containerd[1510]: 2025-10-30 00:05:14.146 [INFO][3783] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" Namespace="calico-system" Pod="whisker-548d886cd6-g6b4q" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0" Oct 30 00:05:14.495493 containerd[1510]: 2025-10-30 00:05:14.307 [INFO][3806] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" HandleID="k8s-pod-network.51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" Workload="ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0" Oct 30 00:05:14.495950 containerd[1510]: 2025-10-30 00:05:14.308 [INFO][3806] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" HandleID="k8s-pod-network.51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" Workload="ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032dc10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-959986c1c8", "pod":"whisker-548d886cd6-g6b4q", "timestamp":"2025-10-30 00:05:14.307555727 +0000 UTC"}, Hostname:"ci-4459.1.0-n-959986c1c8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:05:14.495950 containerd[1510]: 2025-10-30 00:05:14.308 [INFO][3806] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:05:14.495950 containerd[1510]: 2025-10-30 00:05:14.308 [INFO][3806] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:05:14.495950 containerd[1510]: 2025-10-30 00:05:14.308 [INFO][3806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-959986c1c8' Oct 30 00:05:14.495950 containerd[1510]: 2025-10-30 00:05:14.327 [INFO][3806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.495950 containerd[1510]: 2025-10-30 00:05:14.340 [INFO][3806] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.495950 containerd[1510]: 2025-10-30 00:05:14.351 [INFO][3806] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.495950 containerd[1510]: 2025-10-30 00:05:14.356 [INFO][3806] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.495950 containerd[1510]: 2025-10-30 00:05:14.360 [INFO][3806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.496558 containerd[1510]: 2025-10-30 00:05:14.360 [INFO][3806] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.496558 containerd[1510]: 2025-10-30 00:05:14.364 [INFO][3806] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5 Oct 30 00:05:14.496558 containerd[1510]: 2025-10-30 00:05:14.373 [INFO][3806] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.496558 containerd[1510]: 2025-10-30 00:05:14.390 [INFO][3806] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.118.129/26] block=192.168.118.128/26 handle="k8s-pod-network.51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.496558 containerd[1510]: 2025-10-30 00:05:14.390 [INFO][3806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.129/26] handle="k8s-pod-network.51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.496558 containerd[1510]: 2025-10-30 00:05:14.390 [INFO][3806] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:05:14.496558 containerd[1510]: 2025-10-30 00:05:14.390 [INFO][3806] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.118.129/26] IPv6=[] ContainerID="51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" HandleID="k8s-pod-network.51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" Workload="ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0" Oct 30 00:05:14.496730 containerd[1510]: 2025-10-30 00:05:14.403 [INFO][3783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" Namespace="calico-system" Pod="whisker-548d886cd6-g6b4q" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0", GenerateName:"whisker-548d886cd6-", Namespace:"calico-system", SelfLink:"", UID:"05f3fe96-a4e2-497a-aa78-f94004b3a92a", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 5, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"548d886cd6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"", Pod:"whisker-548d886cd6-g6b4q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.118.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali029eae7c53d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:14.496730 containerd[1510]: 2025-10-30 00:05:14.403 [INFO][3783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.129/32] ContainerID="51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" Namespace="calico-system" Pod="whisker-548d886cd6-g6b4q" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0" Oct 30 00:05:14.496825 containerd[1510]: 2025-10-30 00:05:14.403 [INFO][3783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali029eae7c53d ContainerID="51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" Namespace="calico-system" Pod="whisker-548d886cd6-g6b4q" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0" Oct 30 00:05:14.496825 containerd[1510]: 2025-10-30 00:05:14.426 [INFO][3783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" Namespace="calico-system" Pod="whisker-548d886cd6-g6b4q" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0" Oct 30 00:05:14.496873 containerd[1510]: 2025-10-30 00:05:14.427 [INFO][3783] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" Namespace="calico-system" Pod="whisker-548d886cd6-g6b4q" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0", GenerateName:"whisker-548d886cd6-", Namespace:"calico-system", SelfLink:"", UID:"05f3fe96-a4e2-497a-aa78-f94004b3a92a", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 5, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"548d886cd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5", Pod:"whisker-548d886cd6-g6b4q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.118.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali029eae7c53d", MAC:"da:4e:62:82:e9:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:14.496933 containerd[1510]: 2025-10-30 00:05:14.489 [INFO][3783] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" Namespace="calico-system" Pod="whisker-548d886cd6-g6b4q" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-whisker--548d886cd6--g6b4q-eth0" Oct 30 00:05:14.627495 containerd[1510]: time="2025-10-30T00:05:14.626679181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed\" id:\"a486f3b6ddbbe4107ad9b8d5e67eb269d2a6c5cb3975c699a822a228acdda07e\" pid:3830 exit_status:1 exited_at:{seconds:1761782714 nanos:624941553}" Oct 30 00:05:14.641267 containerd[1510]: time="2025-10-30T00:05:14.641217974Z" level=info msg="connecting to shim 51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5" address="unix:///run/containerd/s/106922abde395e656ae375dffb647f5faaa3f67db8466538576fb29eec696152" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:05:14.643666 systemd-networkd[1424]: calid50ab6d4230: Link UP Oct 30 00:05:14.651133 systemd-networkd[1424]: calid50ab6d4230: Gained carrier Oct 30 00:05:14.683260 containerd[1510]: 2025-10-30 00:05:14.141 [INFO][3791] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:05:14.683260 containerd[1510]: 2025-10-30 00:05:14.168 [INFO][3791] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0 calico-apiserver-7668ff9dd9- calico-apiserver 7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62 877 0 2025-10-30 00:04:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7668ff9dd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-959986c1c8 calico-apiserver-7668ff9dd9-98c6b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid50ab6d4230 [] 
[] }} ContainerID="81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-98c6b" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-" Oct 30 00:05:14.683260 containerd[1510]: 2025-10-30 00:05:14.168 [INFO][3791] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-98c6b" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0" Oct 30 00:05:14.683260 containerd[1510]: 2025-10-30 00:05:14.307 [INFO][3808] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" HandleID="k8s-pod-network.81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" Workload="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0" Oct 30 00:05:14.683515 containerd[1510]: 2025-10-30 00:05:14.308 [INFO][3808] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" HandleID="k8s-pod-network.81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" Workload="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003102f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-959986c1c8", "pod":"calico-apiserver-7668ff9dd9-98c6b", "timestamp":"2025-10-30 00:05:14.307691756 +0000 UTC"}, Hostname:"ci-4459.1.0-n-959986c1c8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:05:14.683515 containerd[1510]: 2025-10-30 00:05:14.308 [INFO][3808] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:05:14.683515 containerd[1510]: 2025-10-30 00:05:14.391 [INFO][3808] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:05:14.683515 containerd[1510]: 2025-10-30 00:05:14.391 [INFO][3808] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-959986c1c8' Oct 30 00:05:14.683515 containerd[1510]: 2025-10-30 00:05:14.434 [INFO][3808] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.683515 containerd[1510]: 2025-10-30 00:05:14.483 [INFO][3808] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.683515 containerd[1510]: 2025-10-30 00:05:14.500 [INFO][3808] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.683515 containerd[1510]: 2025-10-30 00:05:14.511 [INFO][3808] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.683515 containerd[1510]: 2025-10-30 00:05:14.520 [INFO][3808] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.683763 containerd[1510]: 2025-10-30 00:05:14.521 [INFO][3808] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.683763 containerd[1510]: 2025-10-30 00:05:14.533 [INFO][3808] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e Oct 30 00:05:14.683763 containerd[1510]: 2025-10-30 00:05:14.544 [INFO][3808] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.118.128/26 
handle="k8s-pod-network.81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.683763 containerd[1510]: 2025-10-30 00:05:14.613 [INFO][3808] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.118.130/26] block=192.168.118.128/26 handle="k8s-pod-network.81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.683763 containerd[1510]: 2025-10-30 00:05:14.613 [INFO][3808] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.130/26] handle="k8s-pod-network.81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:14.683763 containerd[1510]: 2025-10-30 00:05:14.613 [INFO][3808] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:05:14.683763 containerd[1510]: 2025-10-30 00:05:14.613 [INFO][3808] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.118.130/26] IPv6=[] ContainerID="81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" HandleID="k8s-pod-network.81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" Workload="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0" Oct 30 00:05:14.683908 containerd[1510]: 2025-10-30 00:05:14.633 [INFO][3791] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-98c6b" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0", GenerateName:"calico-apiserver-7668ff9dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62", ResourceVersion:"877", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7668ff9dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"", Pod:"calico-apiserver-7668ff9dd9-98c6b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid50ab6d4230", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:14.683964 containerd[1510]: 2025-10-30 00:05:14.633 [INFO][3791] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.130/32] ContainerID="81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-98c6b" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0" Oct 30 00:05:14.683964 containerd[1510]: 2025-10-30 00:05:14.633 [INFO][3791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid50ab6d4230 ContainerID="81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-98c6b" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0" Oct 30 00:05:14.683964 containerd[1510]: 2025-10-30 00:05:14.651 
[INFO][3791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-98c6b" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0" Oct 30 00:05:14.684035 containerd[1510]: 2025-10-30 00:05:14.651 [INFO][3791] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-98c6b" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0", GenerateName:"calico-apiserver-7668ff9dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7668ff9dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e", Pod:"calico-apiserver-7668ff9dd9-98c6b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.118.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid50ab6d4230", MAC:"d6:ea:2a:ef:0c:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:14.684090 containerd[1510]: 2025-10-30 00:05:14.675 [INFO][3791] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-98c6b" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--98c6b-eth0" Oct 30 00:05:14.771795 systemd[1]: Started cri-containerd-51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5.scope - libcontainer container 51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5. Oct 30 00:05:14.778525 containerd[1510]: time="2025-10-30T00:05:14.778387048Z" level=info msg="connecting to shim 81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e" address="unix:///run/containerd/s/333ccd6e70bdd076eee3ae85eb78dedd9bd7668d1130b87e1314da05336f6c66" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:05:14.833676 systemd[1]: Started cri-containerd-81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e.scope - libcontainer container 81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e. 
Oct 30 00:05:14.901812 containerd[1510]: time="2025-10-30T00:05:14.901349865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548d886cd6-g6b4q,Uid:05f3fe96-a4e2-497a-aa78-f94004b3a92a,Namespace:calico-system,Attempt:0,} returns sandbox id \"51e2ccdb46a37ea9fe41dd1fccf7877b55198cb6e31d1ce9d1cdba17b19565c5\"" Oct 30 00:05:14.905779 containerd[1510]: time="2025-10-30T00:05:14.905567230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:05:14.967003 containerd[1510]: time="2025-10-30T00:05:14.966929694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7668ff9dd9-98c6b,Uid:7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"81a2a302e216fe22c21e5b2e766f00af3cd7abdcd95ea305959f8b5429775b5e\"" Oct 30 00:05:15.102057 containerd[1510]: time="2025-10-30T00:05:15.102000672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wtvtg,Uid:7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71,Namespace:calico-system,Attempt:0,}" Oct 30 00:05:15.144815 kubelet[2689]: I1030 00:05:15.142849 2689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe97583-c6f6-4157-9d17-86a8d42b9d6d" path="/var/lib/kubelet/pods/fbe97583-c6f6-4157-9d17-86a8d42b9d6d/volumes" Oct 30 00:05:15.265233 containerd[1510]: time="2025-10-30T00:05:15.264619585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:15.265726 containerd[1510]: time="2025-10-30T00:05:15.265569861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:05:15.265726 containerd[1510]: time="2025-10-30T00:05:15.265688940Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:05:15.276252 kubelet[2689]: E1030 00:05:15.270272 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:05:15.278625 kubelet[2689]: E1030 00:05:15.278177 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:05:15.286964 kubelet[2689]: E1030 00:05:15.286182 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:55498d7b2df74b079d072fc32427c68e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grsrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOpt
ions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548d886cd6-g6b4q_calico-system(05f3fe96-a4e2-497a-aa78-f94004b3a92a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:15.299515 containerd[1510]: time="2025-10-30T00:05:15.299285548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:05:15.363684 kubelet[2689]: E1030 00:05:15.363646 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:15.441402 systemd-networkd[1424]: calie4e4d1019ce: Link UP Oct 30 00:05:15.451921 systemd-networkd[1424]: calie4e4d1019ce: Gained carrier Oct 30 00:05:15.490661 containerd[1510]: 2025-10-30 00:05:15.201 [INFO][4029] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:05:15.490661 containerd[1510]: 2025-10-30 00:05:15.224 [INFO][4029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0 goldmane-666569f655- calico-system 7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71 875 0 2025-10-30 00:04:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.1.0-n-959986c1c8 goldmane-666569f655-wtvtg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie4e4d1019ce [] [] }} ContainerID="8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" Namespace="calico-system" Pod="goldmane-666569f655-wtvtg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-" Oct 30 00:05:15.490661 containerd[1510]: 2025-10-30 00:05:15.224 [INFO][4029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" Namespace="calico-system" Pod="goldmane-666569f655-wtvtg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0" Oct 30 00:05:15.490661 containerd[1510]: 2025-10-30 00:05:15.325 [INFO][4046] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" HandleID="k8s-pod-network.8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" Workload="ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0" Oct 30 00:05:15.490983 containerd[1510]: 2025-10-30 00:05:15.327 [INFO][4046] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" HandleID="k8s-pod-network.8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" Workload="ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000405b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-959986c1c8", "pod":"goldmane-666569f655-wtvtg", "timestamp":"2025-10-30 00:05:15.325735169 +0000 UTC"}, Hostname:"ci-4459.1.0-n-959986c1c8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:05:15.490983 containerd[1510]: 2025-10-30 00:05:15.327 [INFO][4046] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:05:15.490983 containerd[1510]: 2025-10-30 00:05:15.328 [INFO][4046] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:05:15.490983 containerd[1510]: 2025-10-30 00:05:15.329 [INFO][4046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-959986c1c8' Oct 30 00:05:15.490983 containerd[1510]: 2025-10-30 00:05:15.341 [INFO][4046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:15.490983 containerd[1510]: 2025-10-30 00:05:15.369 [INFO][4046] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:15.490983 containerd[1510]: 2025-10-30 00:05:15.382 [INFO][4046] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:15.490983 containerd[1510]: 2025-10-30 00:05:15.388 [INFO][4046] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:15.490983 containerd[1510]: 2025-10-30 00:05:15.393 [INFO][4046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:15.491827 containerd[1510]: 2025-10-30 00:05:15.394 [INFO][4046] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:15.491827 containerd[1510]: 2025-10-30 00:05:15.400 [INFO][4046] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b Oct 30 00:05:15.491827 
containerd[1510]: 2025-10-30 00:05:15.409 [INFO][4046] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:15.491827 containerd[1510]: 2025-10-30 00:05:15.423 [INFO][4046] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.118.131/26] block=192.168.118.128/26 handle="k8s-pod-network.8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:15.491827 containerd[1510]: 2025-10-30 00:05:15.423 [INFO][4046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.131/26] handle="k8s-pod-network.8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:15.491827 containerd[1510]: 2025-10-30 00:05:15.424 [INFO][4046] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:05:15.491827 containerd[1510]: 2025-10-30 00:05:15.424 [INFO][4046] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.118.131/26] IPv6=[] ContainerID="8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" HandleID="k8s-pod-network.8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" Workload="ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0" Oct 30 00:05:15.492047 containerd[1510]: 2025-10-30 00:05:15.428 [INFO][4029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" Namespace="calico-system" Pod="goldmane-666569f655-wtvtg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", 
UID:"7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"", Pod:"goldmane-666569f655-wtvtg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.118.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie4e4d1019ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:15.492148 containerd[1510]: 2025-10-30 00:05:15.429 [INFO][4029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.131/32] ContainerID="8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" Namespace="calico-system" Pod="goldmane-666569f655-wtvtg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0" Oct 30 00:05:15.492148 containerd[1510]: 2025-10-30 00:05:15.429 [INFO][4029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4e4d1019ce ContainerID="8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" Namespace="calico-system" Pod="goldmane-666569f655-wtvtg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0" Oct 30 00:05:15.492148 containerd[1510]: 2025-10-30 00:05:15.455 [INFO][4029] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" Namespace="calico-system" Pod="goldmane-666569f655-wtvtg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0" Oct 30 00:05:15.492722 containerd[1510]: 2025-10-30 00:05:15.460 [INFO][4029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" Namespace="calico-system" Pod="goldmane-666569f655-wtvtg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b", Pod:"goldmane-666569f655-wtvtg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.118.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calie4e4d1019ce", MAC:"da:af:1a:4b:1a:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:15.494143 containerd[1510]: 2025-10-30 00:05:15.485 [INFO][4029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" Namespace="calico-system" Pod="goldmane-666569f655-wtvtg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-goldmane--666569f655--wtvtg-eth0" Oct 30 00:05:15.535970 containerd[1510]: time="2025-10-30T00:05:15.535132861Z" level=info msg="connecting to shim 8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b" address="unix:///run/containerd/s/dfe1e4aa45c37fe3f8c145ca2116f96e8529911a666a6550f6612f1446ef7677" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:05:15.586601 systemd-networkd[1424]: cali029eae7c53d: Gained IPv6LL Oct 30 00:05:15.599355 systemd[1]: Started cri-containerd-8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b.scope - libcontainer container 8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b. 
Oct 30 00:05:15.712997 containerd[1510]: time="2025-10-30T00:05:15.712954864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed\" id:\"d6138925b942bdcbdf880d22209a0353e23036011bd32f2c2d3f5c549bb5decf\" pid:4066 exit_status:1 exited_at:{seconds:1761782715 nanos:710281410}" Oct 30 00:05:15.752247 containerd[1510]: time="2025-10-30T00:05:15.752080649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wtvtg,Uid:7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f7421b1ad564330c264f1ea4497596e1da56b51e94d25f398a5ba037e29ad7b\"" Oct 30 00:05:16.011279 systemd-networkd[1424]: vxlan.calico: Link UP Oct 30 00:05:16.011293 systemd-networkd[1424]: vxlan.calico: Gained carrier Oct 30 00:05:16.102295 containerd[1510]: time="2025-10-30T00:05:16.102226241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7vb2j,Uid:06390243-fcd9-4c68-9f88-5b23f795b967,Namespace:calico-system,Attempt:0,}" Oct 30 00:05:16.211701 containerd[1510]: time="2025-10-30T00:05:16.211648725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:16.213272 containerd[1510]: time="2025-10-30T00:05:16.213220833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:05:16.213391 containerd[1510]: time="2025-10-30T00:05:16.213316398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:05:16.217132 kubelet[2689]: E1030 00:05:16.214320 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:16.217132 kubelet[2689]: E1030 00:05:16.214392 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:16.217370 kubelet[2689]: E1030 00:05:16.214651 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48np5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7668ff9dd9-98c6b_calico-apiserver(7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:16.217990 kubelet[2689]: E1030 00:05:16.217736 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" podUID="7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62" Oct 30 00:05:16.220344 containerd[1510]: time="2025-10-30T00:05:16.219777778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:05:16.286032 systemd-networkd[1424]: 
cali24ba481d457: Link UP Oct 30 00:05:16.286982 systemd-networkd[1424]: cali24ba481d457: Gained carrier Oct 30 00:05:16.310215 containerd[1510]: 2025-10-30 00:05:16.177 [INFO][4196] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0 csi-node-driver- calico-system 06390243-fcd9-4c68-9f88-5b23f795b967 760 0 2025-10-30 00:04:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.1.0-n-959986c1c8 csi-node-driver-7vb2j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali24ba481d457 [] [] }} ContainerID="d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" Namespace="calico-system" Pod="csi-node-driver-7vb2j" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-" Oct 30 00:05:16.310215 containerd[1510]: 2025-10-30 00:05:16.177 [INFO][4196] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" Namespace="calico-system" Pod="csi-node-driver-7vb2j" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0" Oct 30 00:05:16.310215 containerd[1510]: 2025-10-30 00:05:16.219 [INFO][4208] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" HandleID="k8s-pod-network.d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" Workload="ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0" Oct 30 00:05:16.310528 containerd[1510]: 2025-10-30 00:05:16.219 [INFO][4208] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" HandleID="k8s-pod-network.d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" Workload="ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-959986c1c8", "pod":"csi-node-driver-7vb2j", "timestamp":"2025-10-30 00:05:16.219555847 +0000 UTC"}, Hostname:"ci-4459.1.0-n-959986c1c8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:05:16.310528 containerd[1510]: 2025-10-30 00:05:16.220 [INFO][4208] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:05:16.310528 containerd[1510]: 2025-10-30 00:05:16.221 [INFO][4208] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:05:16.310528 containerd[1510]: 2025-10-30 00:05:16.221 [INFO][4208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-959986c1c8' Oct 30 00:05:16.310528 containerd[1510]: 2025-10-30 00:05:16.236 [INFO][4208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:16.310528 containerd[1510]: 2025-10-30 00:05:16.242 [INFO][4208] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:16.310528 containerd[1510]: 2025-10-30 00:05:16.248 [INFO][4208] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:16.310528 containerd[1510]: 2025-10-30 00:05:16.252 [INFO][4208] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:16.310528 containerd[1510]: 2025-10-30 00:05:16.255 [INFO][4208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:16.310778 containerd[1510]: 2025-10-30 00:05:16.255 [INFO][4208] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:16.310778 containerd[1510]: 2025-10-30 00:05:16.257 [INFO][4208] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8 Oct 30 00:05:16.310778 containerd[1510]: 2025-10-30 00:05:16.265 [INFO][4208] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:16.310778 containerd[1510]: 2025-10-30 00:05:16.277 [INFO][4208] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.118.132/26] block=192.168.118.128/26 handle="k8s-pod-network.d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:16.310778 containerd[1510]: 2025-10-30 00:05:16.277 [INFO][4208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.132/26] handle="k8s-pod-network.d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:16.310778 containerd[1510]: 2025-10-30 00:05:16.277 [INFO][4208] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:05:16.310778 containerd[1510]: 2025-10-30 00:05:16.277 [INFO][4208] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.118.132/26] IPv6=[] ContainerID="d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" HandleID="k8s-pod-network.d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" Workload="ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0" Oct 30 00:05:16.310930 containerd[1510]: 2025-10-30 00:05:16.281 [INFO][4196] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" Namespace="calico-system" Pod="csi-node-driver-7vb2j" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06390243-fcd9-4c68-9f88-5b23f795b967", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"", Pod:"csi-node-driver-7vb2j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24ba481d457", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:16.310997 containerd[1510]: 2025-10-30 00:05:16.281 [INFO][4196] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.132/32] ContainerID="d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" Namespace="calico-system" Pod="csi-node-driver-7vb2j" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0" Oct 30 00:05:16.310997 containerd[1510]: 2025-10-30 00:05:16.281 [INFO][4196] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24ba481d457 ContainerID="d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" Namespace="calico-system" Pod="csi-node-driver-7vb2j" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0" Oct 30 00:05:16.310997 containerd[1510]: 2025-10-30 00:05:16.287 [INFO][4196] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" Namespace="calico-system" Pod="csi-node-driver-7vb2j" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0" Oct 30 00:05:16.311062 
containerd[1510]: 2025-10-30 00:05:16.288 [INFO][4196] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" Namespace="calico-system" Pod="csi-node-driver-7vb2j" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06390243-fcd9-4c68-9f88-5b23f795b967", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8", Pod:"csi-node-driver-7vb2j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24ba481d457", MAC:"fa:70:6c:af:50:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:16.312250 containerd[1510]: 
2025-10-30 00:05:16.303 [INFO][4196] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" Namespace="calico-system" Pod="csi-node-driver-7vb2j" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-csi--node--driver--7vb2j-eth0" Oct 30 00:05:16.340836 containerd[1510]: time="2025-10-30T00:05:16.340753563Z" level=info msg="connecting to shim d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8" address="unix:///run/containerd/s/b701febe8d5fa39666efda5cf3a388005d66a7fbafde0e03a84189c624413f23" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:05:16.365151 kubelet[2689]: E1030 00:05:16.364569 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" podUID="7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62" Oct 30 00:05:16.400669 systemd[1]: Started cri-containerd-d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8.scope - libcontainer container d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8. 
Oct 30 00:05:16.418150 systemd-networkd[1424]: calid50ab6d4230: Gained IPv6LL Oct 30 00:05:16.462711 containerd[1510]: time="2025-10-30T00:05:16.462666320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7vb2j,Uid:06390243-fcd9-4c68-9f88-5b23f795b967,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7c4cec87084189e39a57040b798479bfb239f66095ed427d705d2be560b70f8\"" Oct 30 00:05:16.563435 containerd[1510]: time="2025-10-30T00:05:16.562677437Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:16.563597 containerd[1510]: time="2025-10-30T00:05:16.563454817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:05:16.563597 containerd[1510]: time="2025-10-30T00:05:16.563573201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:05:16.564568 kubelet[2689]: E1030 00:05:16.563840 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:05:16.566878 kubelet[2689]: E1030 00:05:16.564363 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:05:16.566878 kubelet[2689]: E1030 00:05:16.565959 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grsrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-548d886cd6-g6b4q_calico-system(05f3fe96-a4e2-497a-aa78-f94004b3a92a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:16.567418 containerd[1510]: time="2025-10-30T00:05:16.567364674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:05:16.567810 kubelet[2689]: E1030 00:05:16.567638 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548d886cd6-g6b4q" podUID="05f3fe96-a4e2-497a-aa78-f94004b3a92a" Oct 30 00:05:16.737435 systemd-networkd[1424]: calie4e4d1019ce: Gained IPv6LL Oct 30 00:05:16.929059 containerd[1510]: time="2025-10-30T00:05:16.928585580Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:16.929994 containerd[1510]: time="2025-10-30T00:05:16.929489267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:05:16.929994 containerd[1510]: time="2025-10-30T00:05:16.929572837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:05:16.931126 kubelet[2689]: E1030 00:05:16.930288 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:05:16.931126 kubelet[2689]: E1030 00:05:16.930355 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:05:16.931126 kubelet[2689]: E1030 00:05:16.930601 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxlc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wtvtg_calico-system(7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:16.931761 kubelet[2689]: E1030 00:05:16.931721 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtvtg" podUID="7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71" Oct 30 00:05:16.932454 containerd[1510]: time="2025-10-30T00:05:16.932235748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:05:17.095199 kubelet[2689]: E1030 00:05:17.094701 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:17.095446 containerd[1510]: time="2025-10-30T00:05:17.094990370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6b9bd746-st5j9,Uid:d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c,Namespace:calico-system,Attempt:0,}" Oct 30 00:05:17.095865 containerd[1510]: time="2025-10-30T00:05:17.095422512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lqr5,Uid:d0e62754-eb00-4b08-8cc7-2b7fa22525b9,Namespace:kube-system,Attempt:0,}" Oct 30 00:05:17.096603 containerd[1510]: time="2025-10-30T00:05:17.096572936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7668ff9dd9-jn9tg,Uid:041ed311-1a2e-462d-ace8-65f00add4557,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:05:17.296239 containerd[1510]: time="2025-10-30T00:05:17.295446759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:17.297815 containerd[1510]: time="2025-10-30T00:05:17.297687456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:05:17.298669 containerd[1510]: time="2025-10-30T00:05:17.297823041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:05:17.301204 kubelet[2689]: E1030 00:05:17.298507 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:05:17.301204 
kubelet[2689]: E1030 00:05:17.300522 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:05:17.301204 kubelet[2689]: E1030 00:05:17.300896 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x52zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,S
eccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7vb2j_calico-system(06390243-fcd9-4c68-9f88-5b23f795b967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:17.305956 containerd[1510]: time="2025-10-30T00:05:17.305913711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:05:17.377946 kubelet[2689]: E1030 00:05:17.377701 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtvtg" podUID="7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71" Oct 30 00:05:17.381736 kubelet[2689]: E1030 00:05:17.381031 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548d886cd6-g6b4q" podUID="05f3fe96-a4e2-497a-aa78-f94004b3a92a" Oct 30 00:05:17.432768 systemd-networkd[1424]: cali3e620e42025: Link UP Oct 30 00:05:17.434698 systemd-networkd[1424]: cali3e620e42025: Gained carrier Oct 30 00:05:17.469684 containerd[1510]: 2025-10-30 00:05:17.219 [INFO][4312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0 coredns-668d6bf9bc- kube-system d0e62754-eb00-4b08-8cc7-2b7fa22525b9 876 0 2025-10-30 00:04:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-959986c1c8 coredns-668d6bf9bc-7lqr5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3e620e42025 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lqr5" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-" Oct 30 00:05:17.469684 containerd[1510]: 2025-10-30 00:05:17.220 [INFO][4312] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lqr5" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0" Oct 30 00:05:17.469684 containerd[1510]: 2025-10-30 
00:05:17.329 [INFO][4352] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" HandleID="k8s-pod-network.8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" Workload="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0" Oct 30 00:05:17.469986 containerd[1510]: 2025-10-30 00:05:17.330 [INFO][4352] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" HandleID="k8s-pod-network.8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" Workload="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312140), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-959986c1c8", "pod":"coredns-668d6bf9bc-7lqr5", "timestamp":"2025-10-30 00:05:17.329475887 +0000 UTC"}, Hostname:"ci-4459.1.0-n-959986c1c8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:05:17.469986 containerd[1510]: 2025-10-30 00:05:17.331 [INFO][4352] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:05:17.469986 containerd[1510]: 2025-10-30 00:05:17.332 [INFO][4352] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:05:17.469986 containerd[1510]: 2025-10-30 00:05:17.332 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-959986c1c8' Oct 30 00:05:17.469986 containerd[1510]: 2025-10-30 00:05:17.348 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.469986 containerd[1510]: 2025-10-30 00:05:17.356 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.469986 containerd[1510]: 2025-10-30 00:05:17.363 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.469986 containerd[1510]: 2025-10-30 00:05:17.366 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.469986 containerd[1510]: 2025-10-30 00:05:17.373 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.470253 containerd[1510]: 2025-10-30 00:05:17.374 [INFO][4352] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.470253 containerd[1510]: 2025-10-30 00:05:17.387 [INFO][4352] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0 Oct 30 00:05:17.470253 containerd[1510]: 2025-10-30 00:05:17.392 [INFO][4352] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.470253 containerd[1510]: 2025-10-30 00:05:17.409 [INFO][4352] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.118.133/26] block=192.168.118.128/26 handle="k8s-pod-network.8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.470253 containerd[1510]: 2025-10-30 00:05:17.409 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.133/26] handle="k8s-pod-network.8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.470253 containerd[1510]: 2025-10-30 00:05:17.409 [INFO][4352] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:05:17.470253 containerd[1510]: 2025-10-30 00:05:17.409 [INFO][4352] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.118.133/26] IPv6=[] ContainerID="8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" HandleID="k8s-pod-network.8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" Workload="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0" Oct 30 00:05:17.470469 containerd[1510]: 2025-10-30 00:05:17.415 [INFO][4312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lqr5" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d0e62754-eb00-4b08-8cc7-2b7fa22525b9", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"", Pod:"coredns-668d6bf9bc-7lqr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e620e42025", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:17.470469 containerd[1510]: 2025-10-30 00:05:17.416 [INFO][4312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.133/32] ContainerID="8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lqr5" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0" Oct 30 00:05:17.470469 containerd[1510]: 2025-10-30 00:05:17.416 [INFO][4312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e620e42025 ContainerID="8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lqr5" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0" Oct 30 00:05:17.470469 containerd[1510]: 2025-10-30 00:05:17.435 [INFO][4312] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lqr5" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0" Oct 30 00:05:17.470469 containerd[1510]: 2025-10-30 00:05:17.438 [INFO][4312] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lqr5" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d0e62754-eb00-4b08-8cc7-2b7fa22525b9", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0", Pod:"coredns-668d6bf9bc-7lqr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e620e42025", 
MAC:"82:36:89:3b:14:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:17.470469 containerd[1510]: 2025-10-30 00:05:17.463 [INFO][4312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lqr5" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--7lqr5-eth0" Oct 30 00:05:17.520715 containerd[1510]: time="2025-10-30T00:05:17.520448218Z" level=info msg="connecting to shim 8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0" address="unix:///run/containerd/s/70ad38199d86ae391cdd80e3629fe1b619c91f97343e30e2c62fe556c4a01734" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:05:17.583141 systemd-networkd[1424]: cali0cd2de2552a: Link UP Oct 30 00:05:17.584613 systemd-networkd[1424]: cali0cd2de2552a: Gained carrier Oct 30 00:05:17.638900 systemd[1]: Started cri-containerd-8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0.scope - libcontainer container 8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0. 
Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.231 [INFO][4319] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0 calico-kube-controllers-7c6b9bd746- calico-system d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c 878 0 2025-10-30 00:04:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c6b9bd746 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.1.0-n-959986c1c8 calico-kube-controllers-7c6b9bd746-st5j9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0cd2de2552a [] [] }} ContainerID="23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" Namespace="calico-system" Pod="calico-kube-controllers-7c6b9bd746-st5j9" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.231 [INFO][4319] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" Namespace="calico-system" Pod="calico-kube-controllers-7c6b9bd746-st5j9" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.336 [INFO][4357] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" HandleID="k8s-pod-network.23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" Workload="ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.337 [INFO][4357] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" HandleID="k8s-pod-network.23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" Workload="ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003296e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-959986c1c8", "pod":"calico-kube-controllers-7c6b9bd746-st5j9", "timestamp":"2025-10-30 00:05:17.336159873 +0000 UTC"}, Hostname:"ci-4459.1.0-n-959986c1c8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.337 [INFO][4357] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.409 [INFO][4357] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.410 [INFO][4357] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-959986c1c8' Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.453 [INFO][4357] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.473 [INFO][4357] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.486 [INFO][4357] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.495 [INFO][4357] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.504 [INFO][4357] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.504 [INFO][4357] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.514 [INFO][4357] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00 Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.533 [INFO][4357] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.557 [INFO][4357] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.118.134/26] block=192.168.118.128/26 handle="k8s-pod-network.23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.557 [INFO][4357] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.134/26] handle="k8s-pod-network.23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.557 [INFO][4357] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:05:17.650600 containerd[1510]: 2025-10-30 00:05:17.557 [INFO][4357] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.118.134/26] IPv6=[] ContainerID="23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" HandleID="k8s-pod-network.23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" Workload="ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0" Oct 30 00:05:17.651722 containerd[1510]: 2025-10-30 00:05:17.572 [INFO][4319] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" Namespace="calico-system" Pod="calico-kube-controllers-7c6b9bd746-st5j9" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0", GenerateName:"calico-kube-controllers-7c6b9bd746-", Namespace:"calico-system", SelfLink:"", UID:"d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c6b9bd746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"", Pod:"calico-kube-controllers-7c6b9bd746-st5j9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0cd2de2552a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:17.651722 containerd[1510]: 2025-10-30 00:05:17.572 [INFO][4319] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.134/32] ContainerID="23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" Namespace="calico-system" Pod="calico-kube-controllers-7c6b9bd746-st5j9" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0" Oct 30 00:05:17.651722 containerd[1510]: 2025-10-30 00:05:17.575 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cd2de2552a ContainerID="23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" Namespace="calico-system" Pod="calico-kube-controllers-7c6b9bd746-st5j9" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0" Oct 30 00:05:17.651722 containerd[1510]: 2025-10-30 00:05:17.583 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" Namespace="calico-system" Pod="calico-kube-controllers-7c6b9bd746-st5j9" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0" Oct 30 00:05:17.651722 containerd[1510]: 2025-10-30 00:05:17.585 [INFO][4319] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" Namespace="calico-system" Pod="calico-kube-controllers-7c6b9bd746-st5j9" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0", GenerateName:"calico-kube-controllers-7c6b9bd746-", Namespace:"calico-system", SelfLink:"", UID:"d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c6b9bd746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00", Pod:"calico-kube-controllers-7c6b9bd746-st5j9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0cd2de2552a", MAC:"86:c7:18:a1:0b:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:17.651722 containerd[1510]: 2025-10-30 00:05:17.636 [INFO][4319] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" Namespace="calico-system" Pod="calico-kube-controllers-7c6b9bd746-st5j9" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--kube--controllers--7c6b9bd746--st5j9-eth0" Oct 30 00:05:17.715133 containerd[1510]: time="2025-10-30T00:05:17.714945604Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:17.720324 containerd[1510]: time="2025-10-30T00:05:17.720242338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:05:17.720480 containerd[1510]: time="2025-10-30T00:05:17.720459372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:05:17.720900 kubelet[2689]: E1030 00:05:17.720835 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:05:17.721076 
kubelet[2689]: E1030 00:05:17.720996 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:05:17.721493 kubelet[2689]: E1030 00:05:17.721345 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x52zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:
*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7vb2j_calico-system(06390243-fcd9-4c68-9f88-5b23f795b967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:17.722932 kubelet[2689]: E1030 00:05:17.722882 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967" Oct 30 00:05:17.726746 containerd[1510]: time="2025-10-30T00:05:17.726163710Z" level=info msg="connecting to shim 23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00" address="unix:///run/containerd/s/c3642f6e3717f3e4171f8cd3f1d2467e86cb12d7b3ac9ccd6226dfdd50638f29" namespace=k8s.io 
protocol=ttrpc version=3 Oct 30 00:05:17.761290 systemd-networkd[1424]: vxlan.calico: Gained IPv6LL Oct 30 00:05:17.766524 systemd[1]: Started cri-containerd-23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00.scope - libcontainer container 23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00. Oct 30 00:05:17.799356 systemd-networkd[1424]: cali62f4abc33c1: Link UP Oct 30 00:05:17.808196 systemd-networkd[1424]: cali62f4abc33c1: Gained carrier Oct 30 00:05:17.857746 containerd[1510]: time="2025-10-30T00:05:17.857616309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lqr5,Uid:d0e62754-eb00-4b08-8cc7-2b7fa22525b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0\"" Oct 30 00:05:17.861479 kubelet[2689]: E1030 00:05:17.861192 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.256 [INFO][4329] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0 calico-apiserver-7668ff9dd9- calico-apiserver 041ed311-1a2e-462d-ace8-65f00add4557 866 0 2025-10-30 00:04:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7668ff9dd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-959986c1c8 calico-apiserver-7668ff9dd9-jn9tg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali62f4abc33c1 [] [] }} ContainerID="843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" Namespace="calico-apiserver" 
Pod="calico-apiserver-7668ff9dd9-jn9tg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.256 [INFO][4329] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-jn9tg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.348 [INFO][4362] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" HandleID="k8s-pod-network.843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" Workload="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.348 [INFO][4362] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" HandleID="k8s-pod-network.843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" Workload="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-959986c1c8", "pod":"calico-apiserver-7668ff9dd9-jn9tg", "timestamp":"2025-10-30 00:05:17.348062709 +0000 UTC"}, Hostname:"ci-4459.1.0-n-959986c1c8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.348 [INFO][4362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.557 [INFO][4362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.562 [INFO][4362] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-959986c1c8' Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.629 [INFO][4362] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.660 [INFO][4362] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.674 [INFO][4362] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.690 [INFO][4362] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.705 [INFO][4362] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.705 [INFO][4362] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.717 [INFO][4362] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.743 [INFO][4362] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.118.128/26 
handle="k8s-pod-network.843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.775 [INFO][4362] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.118.135/26] block=192.168.118.128/26 handle="k8s-pod-network.843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.776 [INFO][4362] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.135/26] handle="k8s-pod-network.843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.776 [INFO][4362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:05:17.866527 containerd[1510]: 2025-10-30 00:05:17.776 [INFO][4362] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.118.135/26] IPv6=[] ContainerID="843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" HandleID="k8s-pod-network.843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" Workload="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0" Oct 30 00:05:17.867649 containerd[1510]: 2025-10-30 00:05:17.783 [INFO][4329] cni-plugin/k8s.go 418: Populated endpoint ContainerID="843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-jn9tg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0", GenerateName:"calico-apiserver-7668ff9dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"041ed311-1a2e-462d-ace8-65f00add4557", ResourceVersion:"866", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7668ff9dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"", Pod:"calico-apiserver-7668ff9dd9-jn9tg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62f4abc33c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:17.867649 containerd[1510]: 2025-10-30 00:05:17.784 [INFO][4329] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.135/32] ContainerID="843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-jn9tg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0" Oct 30 00:05:17.867649 containerd[1510]: 2025-10-30 00:05:17.784 [INFO][4329] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62f4abc33c1 ContainerID="843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-jn9tg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0" Oct 30 00:05:17.867649 containerd[1510]: 2025-10-30 00:05:17.811 
[INFO][4329] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-jn9tg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0" Oct 30 00:05:17.867649 containerd[1510]: 2025-10-30 00:05:17.812 [INFO][4329] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-jn9tg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0", GenerateName:"calico-apiserver-7668ff9dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"041ed311-1a2e-462d-ace8-65f00add4557", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7668ff9dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce", Pod:"calico-apiserver-7668ff9dd9-jn9tg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.118.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62f4abc33c1", MAC:"da:1a:fb:dc:9e:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:17.867649 containerd[1510]: 2025-10-30 00:05:17.856 [INFO][4329] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" Namespace="calico-apiserver" Pod="calico-apiserver-7668ff9dd9-jn9tg" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-calico--apiserver--7668ff9dd9--jn9tg-eth0" Oct 30 00:05:17.869954 containerd[1510]: time="2025-10-30T00:05:17.868121405Z" level=info msg="CreateContainer within sandbox \"8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:05:17.890701 containerd[1510]: time="2025-10-30T00:05:17.890648995Z" level=info msg="Container 905fdb6a27a02ba73b8c27c756b2c682d86c56841748c80d4f7f2934d4f74935: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:05:17.902654 containerd[1510]: time="2025-10-30T00:05:17.902599006Z" level=info msg="CreateContainer within sandbox \"8332f707b425199a8ab3d273136c072f49d416c28a553494bd75b137fe0659d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"905fdb6a27a02ba73b8c27c756b2c682d86c56841748c80d4f7f2934d4f74935\"" Oct 30 00:05:17.904511 containerd[1510]: time="2025-10-30T00:05:17.904399233Z" level=info msg="StartContainer for \"905fdb6a27a02ba73b8c27c756b2c682d86c56841748c80d4f7f2934d4f74935\"" Oct 30 00:05:17.905704 containerd[1510]: time="2025-10-30T00:05:17.905661378Z" level=info msg="connecting to shim 905fdb6a27a02ba73b8c27c756b2c682d86c56841748c80d4f7f2934d4f74935" 
address="unix:///run/containerd/s/70ad38199d86ae391cdd80e3629fe1b619c91f97343e30e2c62fe556c4a01734" protocol=ttrpc version=3 Oct 30 00:05:17.911221 containerd[1510]: time="2025-10-30T00:05:17.911166623Z" level=info msg="connecting to shim 843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce" address="unix:///run/containerd/s/4fbbea187a3890125b62720068dd3983c0223b3a0de650e2ac4d1fe1e966a16c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:05:17.936944 systemd[1]: Started cri-containerd-905fdb6a27a02ba73b8c27c756b2c682d86c56841748c80d4f7f2934d4f74935.scope - libcontainer container 905fdb6a27a02ba73b8c27c756b2c682d86c56841748c80d4f7f2934d4f74935. Oct 30 00:05:17.967597 systemd[1]: Started cri-containerd-843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce.scope - libcontainer container 843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce. Oct 30 00:05:18.041338 containerd[1510]: time="2025-10-30T00:05:18.041287909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6b9bd746-st5j9,Uid:d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"23f7d444e40905c22780909f35f8c8735230d3a2f5c45e086e680459df478d00\"" Oct 30 00:05:18.047026 containerd[1510]: time="2025-10-30T00:05:18.046874796Z" level=info msg="StartContainer for \"905fdb6a27a02ba73b8c27c756b2c682d86c56841748c80d4f7f2934d4f74935\" returns successfully" Oct 30 00:05:18.048910 containerd[1510]: time="2025-10-30T00:05:18.048873649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:05:18.093765 kubelet[2689]: E1030 00:05:18.093726 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:18.099639 containerd[1510]: time="2025-10-30T00:05:18.098089508Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-glxrs,Uid:7402bacb-6343-422b-b5ae-563901a1a2d5,Namespace:kube-system,Attempt:0,}" Oct 30 00:05:18.145251 systemd-networkd[1424]: cali24ba481d457: Gained IPv6LL Oct 30 00:05:18.249496 containerd[1510]: time="2025-10-30T00:05:18.249434419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7668ff9dd9-jn9tg,Uid:041ed311-1a2e-462d-ace8-65f00add4557,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"843345525fffdf6e1553b5fe2a46b4fae14a3250602e71b3a776a6728ae76dce\"" Oct 30 00:05:18.364364 containerd[1510]: time="2025-10-30T00:05:18.364305365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:18.365991 containerd[1510]: time="2025-10-30T00:05:18.365869137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:05:18.366333 containerd[1510]: time="2025-10-30T00:05:18.366035388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:05:18.367330 kubelet[2689]: E1030 00:05:18.367210 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:05:18.367663 kubelet[2689]: E1030 00:05:18.367304 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:05:18.367987 kubelet[2689]: E1030 00:05:18.367833 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwjpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c6b9bd746-st5j9_calico-system(d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:18.368609 containerd[1510]: time="2025-10-30T00:05:18.368569982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:05:18.369410 kubelet[2689]: E1030 00:05:18.369314 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" podUID="d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c" Oct 30 00:05:18.385312 kubelet[2689]: 
E1030 00:05:18.385236 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:18.393035 kubelet[2689]: E1030 00:05:18.392981 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" podUID="d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c" Oct 30 00:05:18.393274 kubelet[2689]: E1030 00:05:18.393050 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967" Oct 30 00:05:18.430707 kubelet[2689]: 
I1030 00:05:18.429448 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7lqr5" podStartSLOduration=43.42660889 podStartE2EDuration="43.42660889s" podCreationTimestamp="2025-10-30 00:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:05:18.419913876 +0000 UTC m=+47.496153105" watchObservedRunningTime="2025-10-30 00:05:18.42660889 +0000 UTC m=+47.502848119" Oct 30 00:05:18.441801 systemd-networkd[1424]: cali6281ab0742d: Link UP Oct 30 00:05:18.448229 systemd-networkd[1424]: cali6281ab0742d: Gained carrier Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.225 [INFO][4567] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0 coredns-668d6bf9bc- kube-system 7402bacb-6343-422b-b5ae-563901a1a2d5 873 0 2025-10-30 00:04:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-959986c1c8 coredns-668d6bf9bc-glxrs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6281ab0742d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" Namespace="kube-system" Pod="coredns-668d6bf9bc-glxrs" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-" Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.226 [INFO][4567] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" Namespace="kube-system" Pod="coredns-668d6bf9bc-glxrs" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0" Oct 30 00:05:18.483337 
containerd[1510]: 2025-10-30 00:05:18.298 [INFO][4586] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" HandleID="k8s-pod-network.079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" Workload="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0" Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.299 [INFO][4586] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" HandleID="k8s-pod-network.079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" Workload="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-959986c1c8", "pod":"coredns-668d6bf9bc-glxrs", "timestamp":"2025-10-30 00:05:18.298292284 +0000 UTC"}, Hostname:"ci-4459.1.0-n-959986c1c8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.299 [INFO][4586] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.299 [INFO][4586] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.299 [INFO][4586] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-959986c1c8' Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.320 [INFO][4586] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.346 [INFO][4586] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.357 [INFO][4586] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.360 [INFO][4586] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.365 [INFO][4586] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.365 [INFO][4586] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.372 [INFO][4586] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.382 [INFO][4586] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.423 [INFO][4586] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.118.136/26] block=192.168.118.128/26 handle="k8s-pod-network.079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.423 [INFO][4586] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.136/26] handle="k8s-pod-network.079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" host="ci-4459.1.0-n-959986c1c8" Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.423 [INFO][4586] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:05:18.483337 containerd[1510]: 2025-10-30 00:05:18.423 [INFO][4586] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.118.136/26] IPv6=[] ContainerID="079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" HandleID="k8s-pod-network.079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" Workload="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0" Oct 30 00:05:18.485125 containerd[1510]: 2025-10-30 00:05:18.429 [INFO][4567] cni-plugin/k8s.go 418: Populated endpoint ContainerID="079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" Namespace="kube-system" Pod="coredns-668d6bf9bc-glxrs" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7402bacb-6343-422b-b5ae-563901a1a2d5", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"", Pod:"coredns-668d6bf9bc-glxrs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6281ab0742d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:18.485125 containerd[1510]: 2025-10-30 00:05:18.430 [INFO][4567] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.136/32] ContainerID="079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" Namespace="kube-system" Pod="coredns-668d6bf9bc-glxrs" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0" Oct 30 00:05:18.485125 containerd[1510]: 2025-10-30 00:05:18.430 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6281ab0742d ContainerID="079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" Namespace="kube-system" Pod="coredns-668d6bf9bc-glxrs" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0" Oct 30 00:05:18.485125 containerd[1510]: 2025-10-30 00:05:18.449 [INFO][4567] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" Namespace="kube-system" Pod="coredns-668d6bf9bc-glxrs" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0" Oct 30 00:05:18.485125 containerd[1510]: 2025-10-30 00:05:18.449 [INFO][4567] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" Namespace="kube-system" Pod="coredns-668d6bf9bc-glxrs" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7402bacb-6343-422b-b5ae-563901a1a2d5", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-959986c1c8", ContainerID:"079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a", Pod:"coredns-668d6bf9bc-glxrs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6281ab0742d", 
MAC:"0e:fc:2e:6b:2d:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:05:18.485125 containerd[1510]: 2025-10-30 00:05:18.478 [INFO][4567] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" Namespace="kube-system" Pod="coredns-668d6bf9bc-glxrs" WorkloadEndpoint="ci--4459.1.0--n--959986c1c8-k8s-coredns--668d6bf9bc--glxrs-eth0" Oct 30 00:05:18.542673 containerd[1510]: time="2025-10-30T00:05:18.542592353Z" level=info msg="connecting to shim 079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a" address="unix:///run/containerd/s/b3b54bc2b2788963261f1e7de6e00939865ea78b23baaea8d6813e1f98c3b081" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:05:18.593787 systemd[1]: Started cri-containerd-079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a.scope - libcontainer container 079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a. 
Oct 30 00:05:18.676437 containerd[1510]: time="2025-10-30T00:05:18.676383638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glxrs,Uid:7402bacb-6343-422b-b5ae-563901a1a2d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a\"" Oct 30 00:05:18.678406 kubelet[2689]: E1030 00:05:18.678372 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:18.682442 containerd[1510]: time="2025-10-30T00:05:18.682318200Z" level=info msg="CreateContainer within sandbox \"079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:05:18.697167 containerd[1510]: time="2025-10-30T00:05:18.696787742Z" level=info msg="Container 67e77ef2ac61d3de8c3e865349888a804a19a7606869e50b1e478dd762505a8f: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:05:18.706839 containerd[1510]: time="2025-10-30T00:05:18.706455736Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:18.707851 containerd[1510]: time="2025-10-30T00:05:18.707632316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:05:18.708733 containerd[1510]: time="2025-10-30T00:05:18.707884508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:05:18.709561 kubelet[2689]: E1030 00:05:18.709136 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:18.709648 containerd[1510]: time="2025-10-30T00:05:18.709387323Z" level=info msg="CreateContainer within sandbox \"079a82dbde02678e90c013363a5fd21deb6603b1f38a22c27d1121aaa41b5a9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67e77ef2ac61d3de8c3e865349888a804a19a7606869e50b1e478dd762505a8f\"" Oct 30 00:05:18.711025 kubelet[2689]: E1030 00:05:18.709703 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:18.711025 kubelet[2689]: E1030 00:05:18.709861 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqvlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7668ff9dd9-jn9tg_calico-apiserver(041ed311-1a2e-462d-ace8-65f00add4557): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:18.711416 containerd[1510]: time="2025-10-30T00:05:18.711386973Z" level=info msg="StartContainer for \"67e77ef2ac61d3de8c3e865349888a804a19a7606869e50b1e478dd762505a8f\"" Oct 30 00:05:18.712125 kubelet[2689]: E1030 00:05:18.711601 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" podUID="041ed311-1a2e-462d-ace8-65f00add4557" Oct 30 00:05:18.713130 containerd[1510]: time="2025-10-30T00:05:18.712563892Z" level=info msg="connecting to shim 67e77ef2ac61d3de8c3e865349888a804a19a7606869e50b1e478dd762505a8f" address="unix:///run/containerd/s/b3b54bc2b2788963261f1e7de6e00939865ea78b23baaea8d6813e1f98c3b081" protocol=ttrpc version=3 Oct 30 00:05:18.745421 systemd[1]: Started cri-containerd-67e77ef2ac61d3de8c3e865349888a804a19a7606869e50b1e478dd762505a8f.scope - libcontainer container 67e77ef2ac61d3de8c3e865349888a804a19a7606869e50b1e478dd762505a8f. Oct 30 00:05:18.813502 containerd[1510]: time="2025-10-30T00:05:18.813435092Z" level=info msg="StartContainer for \"67e77ef2ac61d3de8c3e865349888a804a19a7606869e50b1e478dd762505a8f\" returns successfully" Oct 30 00:05:18.978127 systemd-networkd[1424]: cali3e620e42025: Gained IPv6LL Oct 30 00:05:19.112408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1387135947.mount: Deactivated successfully. 
Oct 30 00:05:19.394569 kubelet[2689]: E1030 00:05:19.394521 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:19.398872 kubelet[2689]: E1030 00:05:19.397806 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:19.399576 kubelet[2689]: E1030 00:05:19.399523 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" podUID="d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c" Oct 30 00:05:19.400518 kubelet[2689]: E1030 00:05:19.400337 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" podUID="041ed311-1a2e-462d-ace8-65f00add4557" Oct 30 00:05:19.425408 systemd-networkd[1424]: cali62f4abc33c1: Gained IPv6LL Oct 30 00:05:19.489435 systemd-networkd[1424]: cali0cd2de2552a: Gained IPv6LL Oct 30 00:05:19.505054 
kubelet[2689]: I1030 00:05:19.504963 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-glxrs" podStartSLOduration=44.504932814 podStartE2EDuration="44.504932814s" podCreationTimestamp="2025-10-30 00:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:05:19.482843737 +0000 UTC m=+48.559082969" watchObservedRunningTime="2025-10-30 00:05:19.504932814 +0000 UTC m=+48.581172060" Oct 30 00:05:20.385681 systemd-networkd[1424]: cali6281ab0742d: Gained IPv6LL Oct 30 00:05:20.397936 kubelet[2689]: E1030 00:05:20.397461 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:20.397936 kubelet[2689]: E1030 00:05:20.397868 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:21.401053 kubelet[2689]: E1030 00:05:21.400770 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:29.097206 containerd[1510]: time="2025-10-30T00:05:29.096772641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:05:29.459903 containerd[1510]: time="2025-10-30T00:05:29.459592881Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:29.461786 containerd[1510]: time="2025-10-30T00:05:29.461509499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:05:29.461786 containerd[1510]: time="2025-10-30T00:05:29.461567594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:05:29.462354 kubelet[2689]: E1030 00:05:29.462247 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:29.462354 kubelet[2689]: E1030 00:05:29.462324 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:29.463531 kubelet[2689]: E1030 00:05:29.462517 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48np5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7668ff9dd9-98c6b_calico-apiserver(7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:29.464047 kubelet[2689]: E1030 00:05:29.463900 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" podUID="7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62" Oct 30 00:05:30.098543 containerd[1510]: time="2025-10-30T00:05:30.098422027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:05:30.113586 systemd[1]: Started sshd@7-147.182.197.56:22-139.178.89.65:43356.service - OpenSSH per-connection server daemon (139.178.89.65:43356). Oct 30 00:05:30.254698 sshd[4710]: Accepted publickey for core from 139.178.89.65 port 43356 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:05:30.256645 sshd-session[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:05:30.264046 systemd-logind[1479]: New session 8 of user core. Oct 30 00:05:30.274652 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 30 00:05:30.425880 containerd[1510]: time="2025-10-30T00:05:30.425625974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:30.427115 containerd[1510]: time="2025-10-30T00:05:30.427041395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:05:30.427348 containerd[1510]: time="2025-10-30T00:05:30.427073457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:05:30.427985 kubelet[2689]: E1030 00:05:30.427719 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:05:30.428278 kubelet[2689]: E1030 00:05:30.428133 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:05:30.430001 kubelet[2689]: E1030 00:05:30.429324 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxlc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wtvtg_calico-system(7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:30.430775 kubelet[2689]: E1030 00:05:30.430656 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtvtg" podUID="7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71" Oct 30 00:05:30.576147 sshd[4713]: Connection closed by 139.178.89.65 port 43356 Oct 30 00:05:30.576774 sshd-session[4710]: pam_unix(sshd:session): session closed for user core Oct 30 00:05:30.585344 systemd[1]: sshd@7-147.182.197.56:22-139.178.89.65:43356.service: 
Deactivated successfully. Oct 30 00:05:30.587944 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 00:05:30.589381 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit. Oct 30 00:05:30.592140 systemd-logind[1479]: Removed session 8. Oct 30 00:05:31.096551 containerd[1510]: time="2025-10-30T00:05:31.096502044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:05:31.503742 containerd[1510]: time="2025-10-30T00:05:31.503580637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:31.505285 containerd[1510]: time="2025-10-30T00:05:31.505226601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:05:31.505536 containerd[1510]: time="2025-10-30T00:05:31.505418173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:05:31.505847 kubelet[2689]: E1030 00:05:31.505793 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:05:31.506200 kubelet[2689]: E1030 00:05:31.505856 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:05:31.506200 kubelet[2689]: E1030 00:05:31.506117 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwjpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c6b9bd746-st5j9_calico-system(d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:31.507351 kubelet[2689]: E1030 00:05:31.507286 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" podUID="d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c" Oct 30 00:05:32.097700 containerd[1510]: time="2025-10-30T00:05:32.097429960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:05:32.513148 containerd[1510]: 
time="2025-10-30T00:05:32.512984311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:32.514522 containerd[1510]: time="2025-10-30T00:05:32.514403794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:05:32.514522 containerd[1510]: time="2025-10-30T00:05:32.514454503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:05:32.515125 kubelet[2689]: E1030 00:05:32.514895 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:05:32.515125 kubelet[2689]: E1030 00:05:32.514947 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:05:32.515828 containerd[1510]: time="2025-10-30T00:05:32.515791099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:05:32.516189 kubelet[2689]: E1030 00:05:32.516122 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x52zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7vb2j_calico-system(06390243-fcd9-4c68-9f88-5b23f795b967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:32.851360 containerd[1510]: time="2025-10-30T00:05:32.851287458Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:32.852713 containerd[1510]: time="2025-10-30T00:05:32.852584699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:05:32.852713 containerd[1510]: time="2025-10-30T00:05:32.852644623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:05:32.853063 kubelet[2689]: E1030 00:05:32.853010 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:05:32.853268 kubelet[2689]: E1030 00:05:32.853245 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:05:32.853770 kubelet[2689]: E1030 00:05:32.853568 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:55498d7b2df74b079d072fc32427c68e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grsrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548d886cd6-g6b4q_calico-system(05f3fe96-a4e2-497a-aa78-f94004b3a92a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:32.854330 containerd[1510]: time="2025-10-30T00:05:32.854183429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 
00:05:33.223956 containerd[1510]: time="2025-10-30T00:05:33.223773852Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:33.251522 containerd[1510]: time="2025-10-30T00:05:33.225257807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:05:33.251522 containerd[1510]: time="2025-10-30T00:05:33.225444260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:05:33.251846 kubelet[2689]: E1030 00:05:33.251787 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:05:33.252061 kubelet[2689]: E1030 00:05:33.251869 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:05:33.252398 kubelet[2689]: E1030 00:05:33.252280 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x52zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7vb2j_calico-system(06390243-fcd9-4c68-9f88-5b23f795b967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:33.252911 containerd[1510]: time="2025-10-30T00:05:33.252799503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:05:33.253784 kubelet[2689]: E1030 00:05:33.253712 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967" Oct 30 00:05:33.608362 containerd[1510]: time="2025-10-30T00:05:33.608284562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:33.617029 containerd[1510]: time="2025-10-30T00:05:33.616846691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:05:33.617029 containerd[1510]: time="2025-10-30T00:05:33.616933767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:05:33.617690 kubelet[2689]: 
E1030 00:05:33.617129 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:05:33.617690 kubelet[2689]: E1030 00:05:33.617178 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:05:33.617690 kubelet[2689]: E1030 00:05:33.617443 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grsrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,S
ubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548d886cd6-g6b4q_calico-system(05f3fe96-a4e2-497a-aa78-f94004b3a92a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:33.618739 containerd[1510]: time="2025-10-30T00:05:33.617963106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:05:33.619353 kubelet[2689]: E1030 00:05:33.619207 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548d886cd6-g6b4q" podUID="05f3fe96-a4e2-497a-aa78-f94004b3a92a" Oct 30 00:05:33.976934 containerd[1510]: time="2025-10-30T00:05:33.976514995Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:33.979084 containerd[1510]: time="2025-10-30T00:05:33.978223906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:05:33.979483 containerd[1510]: time="2025-10-30T00:05:33.979064733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:05:33.980149 kubelet[2689]: E1030 00:05:33.979753 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:33.980149 kubelet[2689]: E1030 00:05:33.979828 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:33.980149 kubelet[2689]: E1030 00:05:33.980027 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqvlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7668ff9dd9-jn9tg_calico-apiserver(041ed311-1a2e-462d-ace8-65f00add4557): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:33.981778 kubelet[2689]: E1030 00:05:33.981710 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" podUID="041ed311-1a2e-462d-ace8-65f00add4557" Oct 30 00:05:35.593496 systemd[1]: Started sshd@8-147.182.197.56:22-139.178.89.65:43358.service - OpenSSH per-connection server daemon (139.178.89.65:43358). 
Oct 30 00:05:35.673164 sshd[4729]: Accepted publickey for core from 139.178.89.65 port 43358 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:05:35.674836 sshd-session[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:05:35.681520 systemd-logind[1479]: New session 9 of user core. Oct 30 00:05:35.693448 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 00:05:35.892676 sshd[4732]: Connection closed by 139.178.89.65 port 43358 Oct 30 00:05:35.894426 sshd-session[4729]: pam_unix(sshd:session): session closed for user core Oct 30 00:05:35.899927 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit. Oct 30 00:05:35.900694 systemd[1]: sshd@8-147.182.197.56:22-139.178.89.65:43358.service: Deactivated successfully. Oct 30 00:05:35.904314 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 00:05:35.908061 systemd-logind[1479]: Removed session 9. Oct 30 00:05:39.095926 kubelet[2689]: E1030 00:05:39.095869 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:40.907322 systemd[1]: Started sshd@9-147.182.197.56:22-139.178.89.65:52982.service - OpenSSH per-connection server daemon (139.178.89.65:52982). Oct 30 00:05:41.001977 sshd[4754]: Accepted publickey for core from 139.178.89.65 port 52982 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:05:41.003743 sshd-session[4754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:05:41.011016 systemd-logind[1479]: New session 10 of user core. Oct 30 00:05:41.019401 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 30 00:05:41.099058 kubelet[2689]: E1030 00:05:41.099009 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtvtg" podUID="7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71" Oct 30 00:05:41.194791 sshd[4757]: Connection closed by 139.178.89.65 port 52982 Oct 30 00:05:41.195465 sshd-session[4754]: pam_unix(sshd:session): session closed for user core Oct 30 00:05:41.206352 systemd[1]: sshd@9-147.182.197.56:22-139.178.89.65:52982.service: Deactivated successfully. Oct 30 00:05:41.209704 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 00:05:41.212511 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit. Oct 30 00:05:41.218308 systemd-logind[1479]: Removed session 10. Oct 30 00:05:41.219121 systemd[1]: Started sshd@10-147.182.197.56:22-139.178.89.65:52990.service - OpenSSH per-connection server daemon (139.178.89.65:52990). Oct 30 00:05:41.288630 sshd[4770]: Accepted publickey for core from 139.178.89.65 port 52990 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:05:41.291165 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:05:41.299607 systemd-logind[1479]: New session 11 of user core. Oct 30 00:05:41.304374 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 30 00:05:41.503321 sshd[4773]: Connection closed by 139.178.89.65 port 52990 Oct 30 00:05:41.504279 sshd-session[4770]: pam_unix(sshd:session): session closed for user core Oct 30 00:05:41.515581 systemd[1]: sshd@10-147.182.197.56:22-139.178.89.65:52990.service: Deactivated successfully. Oct 30 00:05:41.520886 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 00:05:41.524341 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit. Oct 30 00:05:41.530090 systemd[1]: Started sshd@11-147.182.197.56:22-139.178.89.65:52994.service - OpenSSH per-connection server daemon (139.178.89.65:52994). Oct 30 00:05:41.536329 systemd-logind[1479]: Removed session 11. Oct 30 00:05:41.619252 sshd[4783]: Accepted publickey for core from 139.178.89.65 port 52994 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:05:41.621202 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:05:41.629874 systemd-logind[1479]: New session 12 of user core. Oct 30 00:05:41.638405 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 30 00:05:41.798689 sshd[4786]: Connection closed by 139.178.89.65 port 52994 Oct 30 00:05:41.797306 sshd-session[4783]: pam_unix(sshd:session): session closed for user core Oct 30 00:05:41.801450 systemd[1]: sshd@11-147.182.197.56:22-139.178.89.65:52994.service: Deactivated successfully. Oct 30 00:05:41.804638 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 00:05:41.806462 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit. Oct 30 00:05:41.809064 systemd-logind[1479]: Removed session 12. 
Oct 30 00:05:42.094580 kubelet[2689]: E1030 00:05:42.094476 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:43.096319 kubelet[2689]: E1030 00:05:43.095971 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" podUID="7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62" Oct 30 00:05:45.482666 containerd[1510]: time="2025-10-30T00:05:45.482613227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed\" id:\"efaa57c4b82a66f326a9231c98cb26e4e40eb3100fa2206405b2eba6cd9d2693\" pid:4811 exited_at:{seconds:1761782745 nanos:481770065}" Oct 30 00:05:45.485716 kubelet[2689]: E1030 00:05:45.485681 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:46.095611 kubelet[2689]: E1030 00:05:46.095471 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" podUID="041ed311-1a2e-462d-ace8-65f00add4557" Oct 30 00:05:46.817012 systemd[1]: Started sshd@12-147.182.197.56:22-139.178.89.65:57086.service - OpenSSH per-connection server daemon (139.178.89.65:57086). Oct 30 00:05:46.921698 sshd[4825]: Accepted publickey for core from 139.178.89.65 port 57086 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:05:46.924004 sshd-session[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:05:46.929521 systemd-logind[1479]: New session 13 of user core. Oct 30 00:05:46.938714 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 00:05:47.100733 kubelet[2689]: E1030 00:05:47.100582 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" podUID="d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c" Oct 30 00:05:47.103825 kubelet[2689]: E1030 00:05:47.103783 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967" Oct 30 00:05:47.179690 sshd[4828]: Connection closed by 139.178.89.65 port 57086 Oct 30 00:05:47.180594 sshd-session[4825]: pam_unix(sshd:session): session closed for user core Oct 30 00:05:47.185139 systemd[1]: sshd@12-147.182.197.56:22-139.178.89.65:57086.service: Deactivated successfully. Oct 30 00:05:47.187700 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 00:05:47.190982 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit. Oct 30 00:05:47.192788 systemd-logind[1479]: Removed session 13. 
Oct 30 00:05:49.099031 kubelet[2689]: E1030 00:05:49.098979 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548d886cd6-g6b4q" podUID="05f3fe96-a4e2-497a-aa78-f94004b3a92a" Oct 30 00:05:52.199982 systemd[1]: Started sshd@13-147.182.197.56:22-139.178.89.65:57100.service - OpenSSH per-connection server daemon (139.178.89.65:57100). Oct 30 00:05:52.283057 sshd[4845]: Accepted publickey for core from 139.178.89.65 port 57100 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:05:52.285295 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:05:52.292617 systemd-logind[1479]: New session 14 of user core. Oct 30 00:05:52.299736 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 00:05:52.536987 sshd[4848]: Connection closed by 139.178.89.65 port 57100 Oct 30 00:05:52.538715 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Oct 30 00:05:52.543813 systemd[1]: sshd@13-147.182.197.56:22-139.178.89.65:57100.service: Deactivated successfully. 
Oct 30 00:05:52.545900 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 00:05:52.547019 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit. Oct 30 00:05:52.548639 systemd-logind[1479]: Removed session 14. Oct 30 00:05:54.097632 containerd[1510]: time="2025-10-30T00:05:54.097488467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:05:54.449600 containerd[1510]: time="2025-10-30T00:05:54.449278180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:54.450566 containerd[1510]: time="2025-10-30T00:05:54.450487525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:05:54.450710 containerd[1510]: time="2025-10-30T00:05:54.450618529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:05:54.451452 kubelet[2689]: E1030 00:05:54.451389 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:05:54.452297 kubelet[2689]: E1030 00:05:54.451464 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:05:54.452297 kubelet[2689]: E1030 00:05:54.451663 
2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxlc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wtvtg_calico-system(7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:54.453349 kubelet[2689]: E1030 00:05:54.453300 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtvtg" podUID="7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71" Oct 30 00:05:55.096030 kubelet[2689]: E1030 00:05:55.095969 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:05:56.096753 
containerd[1510]: time="2025-10-30T00:05:56.095942442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:05:56.415240 containerd[1510]: time="2025-10-30T00:05:56.415003978Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:56.416119 containerd[1510]: time="2025-10-30T00:05:56.415981277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:05:56.416119 containerd[1510]: time="2025-10-30T00:05:56.415987334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:05:56.416553 kubelet[2689]: E1030 00:05:56.416490 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:56.417082 kubelet[2689]: E1030 00:05:56.416553 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:56.417082 kubelet[2689]: E1030 00:05:56.416706 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48np5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7668ff9dd9-98c6b_calico-apiserver(7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:56.418326 kubelet[2689]: E1030 00:05:56.417941 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" podUID="7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62" Oct 30 00:05:57.557368 systemd[1]: Started sshd@14-147.182.197.56:22-139.178.89.65:55158.service - OpenSSH per-connection server daemon (139.178.89.65:55158). Oct 30 00:05:57.679026 sshd[4867]: Accepted publickey for core from 139.178.89.65 port 55158 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:05:57.681162 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:05:57.690325 systemd-logind[1479]: New session 15 of user core. Oct 30 00:05:57.700811 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 00:05:58.009657 sshd[4870]: Connection closed by 139.178.89.65 port 55158 Oct 30 00:05:58.008514 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Oct 30 00:05:58.013813 systemd[1]: sshd@14-147.182.197.56:22-139.178.89.65:55158.service: Deactivated successfully. Oct 30 00:05:58.016971 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 00:05:58.018816 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit. Oct 30 00:05:58.021044 systemd-logind[1479]: Removed session 15. 
Oct 30 00:05:58.096947 containerd[1510]: time="2025-10-30T00:05:58.096853268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:05:58.427252 containerd[1510]: time="2025-10-30T00:05:58.427197600Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:58.428247 containerd[1510]: time="2025-10-30T00:05:58.428191932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:05:58.428619 containerd[1510]: time="2025-10-30T00:05:58.428290752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:05:58.428673 kubelet[2689]: E1030 00:05:58.428512 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:58.428673 kubelet[2689]: E1030 00:05:58.428573 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:05:58.429437 kubelet[2689]: E1030 00:05:58.429371 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqvlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7668ff9dd9-jn9tg_calico-apiserver(041ed311-1a2e-462d-ace8-65f00add4557): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:58.430627 kubelet[2689]: E1030 00:05:58.430585 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" podUID="041ed311-1a2e-462d-ace8-65f00add4557" Oct 30 00:05:59.102806 containerd[1510]: time="2025-10-30T00:05:59.102741406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:05:59.487512 containerd[1510]: time="2025-10-30T00:05:59.487335523Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:59.488418 containerd[1510]: time="2025-10-30T00:05:59.488312727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:05:59.488808 containerd[1510]: time="2025-10-30T00:05:59.488757003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:05:59.489139 kubelet[2689]: E1030 00:05:59.489072 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:05:59.489498 kubelet[2689]: E1030 00:05:59.489157 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:05:59.489498 kubelet[2689]: E1030 00:05:59.489307 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x52zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[A
LL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7vb2j_calico-system(06390243-fcd9-4c68-9f88-5b23f795b967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:59.491720 containerd[1510]: time="2025-10-30T00:05:59.491687930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:05:59.835201 containerd[1510]: time="2025-10-30T00:05:59.834716008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:05:59.836062 containerd[1510]: time="2025-10-30T00:05:59.835863378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:05:59.836062 containerd[1510]: time="2025-10-30T00:05:59.835909880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:05:59.836619 kubelet[2689]: E1030 00:05:59.836519 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:05:59.836619 kubelet[2689]: E1030 00:05:59.836592 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:05:59.836931 kubelet[2689]: E1030 00:05:59.836891 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x52zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7vb2j_calico-system(06390243-fcd9-4c68-9f88-5b23f795b967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:05:59.839762 kubelet[2689]: E1030 00:05:59.839702 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967" Oct 30 
00:06:00.096984 kubelet[2689]: E1030 00:06:00.096825 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 30 00:06:00.100952 containerd[1510]: time="2025-10-30T00:06:00.100872491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:06:00.436187 containerd[1510]: time="2025-10-30T00:06:00.435872242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:06:00.437447 containerd[1510]: time="2025-10-30T00:06:00.437053074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:06:00.437599 containerd[1510]: time="2025-10-30T00:06:00.437580093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:06:00.439276 kubelet[2689]: E1030 00:06:00.439225 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:06:00.439458 kubelet[2689]: E1030 00:06:00.439435 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:06:00.439660 kubelet[2689]: E1030 00:06:00.439616 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwjpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c6b9bd746-st5j9_calico-system(d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:06:00.441384 kubelet[2689]: E1030 00:06:00.441346 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" podUID="d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c" Oct 30 00:06:03.026472 systemd[1]: Started sshd@15-147.182.197.56:22-139.178.89.65:55170.service - OpenSSH per-connection server daemon (139.178.89.65:55170). 
Oct 30 00:06:03.099420 containerd[1510]: time="2025-10-30T00:06:03.098616904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:06:03.102953 sshd[4884]: Accepted publickey for core from 139.178.89.65 port 55170 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:06:03.102787 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:03.114832 systemd-logind[1479]: New session 16 of user core. Oct 30 00:06:03.119693 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 00:06:03.339332 sshd[4887]: Connection closed by 139.178.89.65 port 55170 Oct 30 00:06:03.341367 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Oct 30 00:06:03.357866 systemd[1]: sshd@15-147.182.197.56:22-139.178.89.65:55170.service: Deactivated successfully. Oct 30 00:06:03.365148 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 00:06:03.366727 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit. Oct 30 00:06:03.374656 systemd[1]: Started sshd@16-147.182.197.56:22-139.178.89.65:55172.service - OpenSSH per-connection server daemon (139.178.89.65:55172). Oct 30 00:06:03.378058 systemd-logind[1479]: Removed session 16. 
Oct 30 00:06:03.453916 containerd[1510]: time="2025-10-30T00:06:03.453773265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:06:03.455626 containerd[1510]: time="2025-10-30T00:06:03.455487432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:06:03.455626 containerd[1510]: time="2025-10-30T00:06:03.455579477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:06:03.455945 kubelet[2689]: E1030 00:06:03.455765 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:06:03.455945 kubelet[2689]: E1030 00:06:03.455834 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:06:03.456917 kubelet[2689]: E1030 00:06:03.455988 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:55498d7b2df74b079d072fc32427c68e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grsrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548d886cd6-g6b4q_calico-system(05f3fe96-a4e2-497a-aa78-f94004b3a92a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:06:03.460503 containerd[1510]: time="2025-10-30T00:06:03.460175433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 
00:06:03.480717 sshd[4899]: Accepted publickey for core from 139.178.89.65 port 55172 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:06:03.484329 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:03.492254 systemd-logind[1479]: New session 17 of user core. Oct 30 00:06:03.501763 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 30 00:06:03.822967 containerd[1510]: time="2025-10-30T00:06:03.822920079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:06:03.824079 containerd[1510]: time="2025-10-30T00:06:03.824020930Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:06:03.824215 containerd[1510]: time="2025-10-30T00:06:03.824154547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:06:03.824595 kubelet[2689]: E1030 00:06:03.824548 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:06:03.824986 kubelet[2689]: E1030 00:06:03.824610 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:06:03.824986 kubelet[2689]: E1030 00:06:03.824740 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grsrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod whisker-548d886cd6-g6b4q_calico-system(05f3fe96-a4e2-497a-aa78-f94004b3a92a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:06:03.827186 kubelet[2689]: E1030 00:06:03.826230 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548d886cd6-g6b4q" podUID="05f3fe96-a4e2-497a-aa78-f94004b3a92a" Oct 30 00:06:03.839770 sshd[4902]: Connection closed by 139.178.89.65 port 55172 Oct 30 00:06:03.843619 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Oct 30 00:06:03.854559 systemd[1]: sshd@16-147.182.197.56:22-139.178.89.65:55172.service: Deactivated successfully. Oct 30 00:06:03.857998 systemd[1]: session-17.scope: Deactivated successfully. Oct 30 00:06:03.862483 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit. Oct 30 00:06:03.867562 systemd[1]: Started sshd@17-147.182.197.56:22-139.178.89.65:55184.service - OpenSSH per-connection server daemon (139.178.89.65:55184). Oct 30 00:06:03.868865 systemd-logind[1479]: Removed session 17. 
Oct 30 00:06:03.947582 sshd[4912]: Accepted publickey for core from 139.178.89.65 port 55184 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:06:03.950373 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:03.958340 systemd-logind[1479]: New session 18 of user core. Oct 30 00:06:03.964338 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 30 00:06:04.940833 sshd[4915]: Connection closed by 139.178.89.65 port 55184 Oct 30 00:06:04.940077 sshd-session[4912]: pam_unix(sshd:session): session closed for user core Oct 30 00:06:04.957277 systemd[1]: sshd@17-147.182.197.56:22-139.178.89.65:55184.service: Deactivated successfully. Oct 30 00:06:04.963798 systemd[1]: session-18.scope: Deactivated successfully. Oct 30 00:06:04.965622 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit. Oct 30 00:06:04.974469 systemd[1]: Started sshd@18-147.182.197.56:22-139.178.89.65:55190.service - OpenSSH per-connection server daemon (139.178.89.65:55190). Oct 30 00:06:04.975832 systemd-logind[1479]: Removed session 18. Oct 30 00:06:05.098675 sshd[4933]: Accepted publickey for core from 139.178.89.65 port 55190 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:06:05.102408 sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:05.113204 systemd-logind[1479]: New session 19 of user core. Oct 30 00:06:05.118381 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 30 00:06:05.552430 sshd[4937]: Connection closed by 139.178.89.65 port 55190 Oct 30 00:06:05.553643 sshd-session[4933]: pam_unix(sshd:session): session closed for user core Oct 30 00:06:05.574833 systemd[1]: sshd@18-147.182.197.56:22-139.178.89.65:55190.service: Deactivated successfully. Oct 30 00:06:05.580803 systemd[1]: session-19.scope: Deactivated successfully. 
Oct 30 00:06:05.583594 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit. Oct 30 00:06:05.589172 systemd[1]: Started sshd@19-147.182.197.56:22-139.178.89.65:55194.service - OpenSSH per-connection server daemon (139.178.89.65:55194). Oct 30 00:06:05.590735 systemd-logind[1479]: Removed session 19. Oct 30 00:06:05.680115 sshd[4947]: Accepted publickey for core from 139.178.89.65 port 55194 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:06:05.682039 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:05.692189 systemd-logind[1479]: New session 20 of user core. Oct 30 00:06:05.700399 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 30 00:06:05.871418 sshd[4950]: Connection closed by 139.178.89.65 port 55194 Oct 30 00:06:05.871817 sshd-session[4947]: pam_unix(sshd:session): session closed for user core Oct 30 00:06:05.878676 systemd[1]: sshd@19-147.182.197.56:22-139.178.89.65:55194.service: Deactivated successfully. Oct 30 00:06:05.885664 systemd[1]: session-20.scope: Deactivated successfully. Oct 30 00:06:05.890461 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit. Oct 30 00:06:05.893896 systemd-logind[1479]: Removed session 20. 
Oct 30 00:06:07.109470 kubelet[2689]: E1030 00:06:07.108243 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtvtg" podUID="7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71"
Oct 30 00:06:10.097264 kubelet[2689]: E1030 00:06:10.097016 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" podUID="041ed311-1a2e-462d-ace8-65f00add4557"
Oct 30 00:06:10.886006 systemd[1]: Started sshd@20-147.182.197.56:22-139.178.89.65:56798.service - OpenSSH per-connection server daemon (139.178.89.65:56798).
Oct 30 00:06:11.016869 sshd[4966]: Accepted publickey for core from 139.178.89.65 port 56798 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:06:11.020399 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:06:11.027758 systemd-logind[1479]: New session 21 of user core.
Oct 30 00:06:11.035545 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 30 00:06:11.102732 kubelet[2689]: E1030 00:06:11.102359 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" podUID="7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62"
Oct 30 00:06:11.106130 kubelet[2689]: E1030 00:06:11.105370 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967"
Oct 30 00:06:11.370182 sshd[4969]: Connection closed by 139.178.89.65 port 56798
Oct 30 00:06:11.372261 sshd-session[4966]: pam_unix(sshd:session): session closed for user core
Oct 30 00:06:11.376645 systemd-logind[1479]: Session 21 logged out. Waiting for processes to exit.
Oct 30 00:06:11.380363 systemd[1]: sshd@20-147.182.197.56:22-139.178.89.65:56798.service: Deactivated successfully.
Oct 30 00:06:11.384264 systemd[1]: session-21.scope: Deactivated successfully.
Oct 30 00:06:11.387596 systemd-logind[1479]: Removed session 21.
Oct 30 00:06:13.098928 kubelet[2689]: E1030 00:06:13.098869 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 30 00:06:15.096502 kubelet[2689]: E1030 00:06:15.096402 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" podUID="d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c"
Oct 30 00:06:15.098584 kubelet[2689]: E1030 00:06:15.098518 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548d886cd6-g6b4q" podUID="05f3fe96-a4e2-497a-aa78-f94004b3a92a"
Oct 30 00:06:15.561842 containerd[1510]: time="2025-10-30T00:06:15.561361639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17071a5081fb294960f3e2df3891f176f5a1638eaf47c0f1804b1b5326b86aed\" id:\"ab40c54df2c4500e9c81342471e06f69587a7a860a75a2181de0995127d59428\" pid:4995 exited_at:{seconds:1761782775 nanos:560791996}"
Oct 30 00:06:16.386436 systemd[1]: Started sshd@21-147.182.197.56:22-139.178.89.65:36856.service - OpenSSH per-connection server daemon (139.178.89.65:36856).
Oct 30 00:06:16.480891 sshd[5007]: Accepted publickey for core from 139.178.89.65 port 36856 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:06:16.484128 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:06:16.492500 systemd-logind[1479]: New session 22 of user core.
Oct 30 00:06:16.500341 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 30 00:06:16.665239 sshd[5010]: Connection closed by 139.178.89.65 port 36856
Oct 30 00:06:16.665442 sshd-session[5007]: pam_unix(sshd:session): session closed for user core
Oct 30 00:06:16.671961 systemd-logind[1479]: Session 22 logged out. Waiting for processes to exit.
Oct 30 00:06:16.672512 systemd[1]: sshd@21-147.182.197.56:22-139.178.89.65:36856.service: Deactivated successfully.
Oct 30 00:06:16.677609 systemd[1]: session-22.scope: Deactivated successfully.
Oct 30 00:06:16.681458 systemd-logind[1479]: Removed session 22.
Oct 30 00:06:21.681490 systemd[1]: Started sshd@22-147.182.197.56:22-139.178.89.65:36872.service - OpenSSH per-connection server daemon (139.178.89.65:36872).
Oct 30 00:06:21.755331 sshd[5022]: Accepted publickey for core from 139.178.89.65 port 36872 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:06:21.757077 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:06:21.769234 systemd-logind[1479]: New session 23 of user core.
Oct 30 00:06:21.772367 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 30 00:06:21.944564 sshd[5025]: Connection closed by 139.178.89.65 port 36872
Oct 30 00:06:21.945635 sshd-session[5022]: pam_unix(sshd:session): session closed for user core
Oct 30 00:06:21.950907 systemd[1]: sshd@22-147.182.197.56:22-139.178.89.65:36872.service: Deactivated successfully.
Oct 30 00:06:21.955383 systemd[1]: session-23.scope: Deactivated successfully.
Oct 30 00:06:21.959837 systemd-logind[1479]: Session 23 logged out. Waiting for processes to exit.
Oct 30 00:06:21.965453 systemd-logind[1479]: Removed session 23.
Oct 30 00:06:22.098532 kubelet[2689]: E1030 00:06:22.098477 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-jn9tg" podUID="041ed311-1a2e-462d-ace8-65f00add4557"
Oct 30 00:06:22.100732 kubelet[2689]: E1030 00:06:22.099858 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtvtg" podUID="7ce87a9a-4a9f-4e2a-b7f9-1e809a938d71"
Oct 30 00:06:22.101465 kubelet[2689]: E1030 00:06:22.100675 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vb2j" podUID="06390243-fcd9-4c68-9f88-5b23f795b967"
Oct 30 00:06:26.101680 kubelet[2689]: E1030 00:06:26.101622 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548d886cd6-g6b4q" podUID="05f3fe96-a4e2-497a-aa78-f94004b3a92a"
Oct 30 00:06:26.104517 kubelet[2689]: E1030 00:06:26.104477 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7668ff9dd9-98c6b" podUID="7fdc9fa2-26e8-4238-9f3b-2e6c25ad7e62"
Oct 30 00:06:26.965904 systemd[1]: Started sshd@23-147.182.197.56:22-139.178.89.65:36962.service - OpenSSH per-connection server daemon (139.178.89.65:36962).
Oct 30 00:06:27.055948 sshd[5038]: Accepted publickey for core from 139.178.89.65 port 36962 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:06:27.057944 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:06:27.067439 systemd-logind[1479]: New session 24 of user core.
Oct 30 00:06:27.073451 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 30 00:06:27.098171 kubelet[2689]: E1030 00:06:27.097774 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c6b9bd746-st5j9" podUID="d930ac2e-f4f2-4b3f-a87d-015fa72b1a3c"
Oct 30 00:06:27.279523 sshd[5041]: Connection closed by 139.178.89.65 port 36962
Oct 30 00:06:27.279380 sshd-session[5038]: pam_unix(sshd:session): session closed for user core
Oct 30 00:06:27.287506 systemd[1]: sshd@23-147.182.197.56:22-139.178.89.65:36962.service: Deactivated successfully.
Oct 30 00:06:27.289923 systemd[1]: session-24.scope: Deactivated successfully.
Oct 30 00:06:27.292176 systemd-logind[1479]: Session 24 logged out. Waiting for processes to exit.
Oct 30 00:06:27.294184 systemd-logind[1479]: Removed session 24.
Oct 30 00:06:29.095086 kubelet[2689]: E1030 00:06:29.094548 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"