Nov 24 07:00:19.884547 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:49:05 -00 2025
Nov 24 07:00:19.884573 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 07:00:19.884586 kernel: BIOS-provided physical RAM map:
Nov 24 07:00:19.884592 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 07:00:19.884599 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 07:00:19.884605 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 07:00:19.884613 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 24 07:00:19.884624 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 24 07:00:19.884630 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 07:00:19.884637 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 07:00:19.884644 kernel: NX (Execute Disable) protection: active
Nov 24 07:00:19.884653 kernel: APIC: Static calls initialized
Nov 24 07:00:19.884660 kernel: SMBIOS 2.8 present.
Nov 24 07:00:19.884667 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 24 07:00:19.884676 kernel: DMI: Memory slots populated: 1/1
Nov 24 07:00:19.884683 kernel: Hypervisor detected: KVM
Nov 24 07:00:19.884696 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 24 07:00:19.884704 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 07:00:19.884712 kernel: kvm-clock: using sched offset of 5369371963 cycles
Nov 24 07:00:19.884720 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 07:00:19.884728 kernel: tsc: Detected 2494.134 MHz processor
Nov 24 07:00:19.884738 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 07:00:19.884754 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 07:00:19.884764 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 24 07:00:19.884775 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 07:00:19.884786 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 24 07:00:19.884802 kernel: ACPI: Early table checksum verification disabled
Nov 24 07:00:19.884814 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 24 07:00:19.884824 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 07:00:19.884831 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 07:00:19.884839 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 07:00:19.884847 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 24 07:00:19.884855 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 07:00:19.884862 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 07:00:19.884873 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 07:00:19.884882 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 07:00:19.884895 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 24 07:00:19.884906 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 24 07:00:19.884916 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 24 07:00:19.884926 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 24 07:00:19.884943 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 24 07:00:19.884957 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 24 07:00:19.884969 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 24 07:00:19.884981 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 24 07:00:19.884994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 24 07:00:19.885006 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Nov 24 07:00:19.885018 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Nov 24 07:00:19.885031 kernel: Zone ranges:
Nov 24 07:00:19.885047 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 07:00:19.885060 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 24 07:00:19.885068 kernel: Normal empty
Nov 24 07:00:19.885076 kernel: Device empty
Nov 24 07:00:19.885084 kernel: Movable zone start for each node
Nov 24 07:00:19.885092 kernel: Early memory node ranges
Nov 24 07:00:19.885100 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 07:00:19.885108 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 24 07:00:19.885116 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 24 07:00:19.885124 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 07:00:19.885135 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 07:00:19.885143 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 24 07:00:19.885151 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 07:00:19.885163 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 07:00:19.885172 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 07:00:19.885181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 07:00:19.885190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 07:00:19.885198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 07:00:19.885209 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 07:00:19.885220 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 07:00:19.885674 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 07:00:19.885689 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 24 07:00:19.885701 kernel: TSC deadline timer available
Nov 24 07:00:19.885712 kernel: CPU topo: Max. logical packages: 1
Nov 24 07:00:19.885723 kernel: CPU topo: Max. logical dies: 1
Nov 24 07:00:19.885736 kernel: CPU topo: Max. dies per package: 1
Nov 24 07:00:19.885748 kernel: CPU topo: Max. threads per core: 1
Nov 24 07:00:19.885760 kernel: CPU topo: Num. cores per package: 2
Nov 24 07:00:19.885780 kernel: CPU topo: Num. threads per package: 2
Nov 24 07:00:19.885795 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 24 07:00:19.885808 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 07:00:19.885822 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 24 07:00:19.885837 kernel: Booting paravirtualized kernel on KVM
Nov 24 07:00:19.885852 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 07:00:19.885866 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 24 07:00:19.885881 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 24 07:00:19.885897 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 24 07:00:19.885912 kernel: pcpu-alloc: [0] 0 1
Nov 24 07:00:19.885924 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 24 07:00:19.885938 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 07:00:19.885950 kernel: random: crng init done
Nov 24 07:00:19.885963 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 24 07:00:19.885975 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 24 07:00:19.885987 kernel: Fallback order for Node 0: 0
Nov 24 07:00:19.886000 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Nov 24 07:00:19.886013 kernel: Policy zone: DMA32
Nov 24 07:00:19.886030 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 07:00:19.886038 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 24 07:00:19.886046 kernel: Kernel/User page tables isolation: enabled
Nov 24 07:00:19.886054 kernel: ftrace: allocating 40103 entries in 157 pages
Nov 24 07:00:19.886062 kernel: ftrace: allocated 157 pages with 5 groups
Nov 24 07:00:19.886070 kernel: Dynamic Preempt: voluntary
Nov 24 07:00:19.886079 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 07:00:19.886089 kernel: rcu: RCU event tracing is enabled.
Nov 24 07:00:19.886097 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 24 07:00:19.886108 kernel: Trampoline variant of Tasks RCU enabled.
Nov 24 07:00:19.886117 kernel: Rude variant of Tasks RCU enabled.
Nov 24 07:00:19.886125 kernel: Tracing variant of Tasks RCU enabled.
Nov 24 07:00:19.886133 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 07:00:19.886141 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 24 07:00:19.886150 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 07:00:19.886163 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 07:00:19.886172 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 07:00:19.886180 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 24 07:00:19.886192 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 07:00:19.886200 kernel: Console: colour VGA+ 80x25
Nov 24 07:00:19.886208 kernel: printk: legacy console [tty0] enabled
Nov 24 07:00:19.886217 kernel: printk: legacy console [ttyS0] enabled
Nov 24 07:00:19.886225 kernel: ACPI: Core revision 20240827
Nov 24 07:00:19.886234 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 24 07:00:19.886266 kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 07:00:19.886278 kernel: x2apic enabled
Nov 24 07:00:19.886287 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 07:00:19.886295 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 24 07:00:19.886304 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Nov 24 07:00:19.886319 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
Nov 24 07:00:19.886328 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 24 07:00:19.886337 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 24 07:00:19.886346 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 07:00:19.886354 kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 07:00:19.886366 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 07:00:19.886375 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 24 07:00:19.886386 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 07:00:19.886403 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 07:00:19.886415 kernel: MDS: Mitigation: Clear CPU buffers
Nov 24 07:00:19.886428 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 24 07:00:19.886441 kernel: active return thunk: its_return_thunk
Nov 24 07:00:19.886451 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 24 07:00:19.886460 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 07:00:19.886472 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 07:00:19.886481 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 07:00:19.886489 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 24 07:00:19.886498 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 24 07:00:19.886507 kernel: Freeing SMP alternatives memory: 32K
Nov 24 07:00:19.886519 kernel: pid_max: default: 32768 minimum: 301
Nov 24 07:00:19.886532 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 24 07:00:19.886544 kernel: landlock: Up and running.
Nov 24 07:00:19.886568 kernel: SELinux: Initializing.
Nov 24 07:00:19.886586 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 24 07:00:19.886600 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 24 07:00:19.886612 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 24 07:00:19.886628 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 24 07:00:19.886642 kernel: signal: max sigframe size: 1776
Nov 24 07:00:19.886656 kernel: rcu: Hierarchical SRCU implementation.
Nov 24 07:00:19.886669 kernel: rcu: Max phase no-delay instances is 400.
Nov 24 07:00:19.886678 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 24 07:00:19.886687 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 24 07:00:19.886699 kernel: smp: Bringing up secondary CPUs ...
Nov 24 07:00:19.886711 kernel: smpboot: x86: Booting SMP configuration:
Nov 24 07:00:19.886720 kernel: .... node #0, CPUs: #1
Nov 24 07:00:19.886729 kernel: smp: Brought up 1 node, 2 CPUs
Nov 24 07:00:19.886738 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
Nov 24 07:00:19.886747 kernel: Memory: 1958716K/2096612K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46200K init, 2560K bss, 133332K reserved, 0K cma-reserved)
Nov 24 07:00:19.886756 kernel: devtmpfs: initialized
Nov 24 07:00:19.886765 kernel: x86/mm: Memory block size: 128MB
Nov 24 07:00:19.886773 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 07:00:19.886786 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 24 07:00:19.886794 kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 07:00:19.886803 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 07:00:19.886812 kernel: audit: initializing netlink subsys (disabled)
Nov 24 07:00:19.886820 kernel: audit: type=2000 audit(1763967616.292:1): state=initialized audit_enabled=0 res=1
Nov 24 07:00:19.886829 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 07:00:19.886837 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 07:00:19.886846 kernel: cpuidle: using governor menu
Nov 24 07:00:19.886855 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 07:00:19.886866 kernel: dca service started, version 1.12.1
Nov 24 07:00:19.886875 kernel: PCI: Using configuration type 1 for base access
Nov 24 07:00:19.886884 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 07:00:19.886892 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 07:00:19.886901 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 07:00:19.886910 kernel: ACPI: Added _OSI(Module Device)
Nov 24 07:00:19.886918 kernel: ACPI: Added _OSI(Processor Device)
Nov 24 07:00:19.886927 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 07:00:19.886936 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 07:00:19.886947 kernel: ACPI: Interpreter enabled
Nov 24 07:00:19.886956 kernel: ACPI: PM: (supports S0 S5)
Nov 24 07:00:19.886965 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 07:00:19.886974 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 07:00:19.886982 kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 07:00:19.886991 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 24 07:00:19.887000 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 07:00:19.889487 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 24 07:00:19.889618 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 24 07:00:19.889714 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 24 07:00:19.889727 kernel: acpiphp: Slot [3] registered
Nov 24 07:00:19.889736 kernel: acpiphp: Slot [4] registered
Nov 24 07:00:19.889745 kernel: acpiphp: Slot [5] registered
Nov 24 07:00:19.889754 kernel: acpiphp: Slot [6] registered
Nov 24 07:00:19.889763 kernel: acpiphp: Slot [7] registered
Nov 24 07:00:19.889771 kernel: acpiphp: Slot [8] registered
Nov 24 07:00:19.889784 kernel: acpiphp: Slot [9] registered
Nov 24 07:00:19.889793 kernel: acpiphp: Slot [10] registered
Nov 24 07:00:19.889802 kernel: acpiphp: Slot [11] registered
Nov 24 07:00:19.889811 kernel: acpiphp: Slot [12] registered
Nov 24 07:00:19.889820 kernel: acpiphp: Slot [13] registered
Nov 24 07:00:19.889828 kernel: acpiphp: Slot [14] registered
Nov 24 07:00:19.889837 kernel: acpiphp: Slot [15] registered
Nov 24 07:00:19.889846 kernel: acpiphp: Slot [16] registered
Nov 24 07:00:19.889854 kernel: acpiphp: Slot [17] registered
Nov 24 07:00:19.889864 kernel: acpiphp: Slot [18] registered
Nov 24 07:00:19.889875 kernel: acpiphp: Slot [19] registered
Nov 24 07:00:19.889884 kernel: acpiphp: Slot [20] registered
Nov 24 07:00:19.889893 kernel: acpiphp: Slot [21] registered
Nov 24 07:00:19.889901 kernel: acpiphp: Slot [22] registered
Nov 24 07:00:19.889910 kernel: acpiphp: Slot [23] registered
Nov 24 07:00:19.889919 kernel: acpiphp: Slot [24] registered
Nov 24 07:00:19.889928 kernel: acpiphp: Slot [25] registered
Nov 24 07:00:19.889936 kernel: acpiphp: Slot [26] registered
Nov 24 07:00:19.889945 kernel: acpiphp: Slot [27] registered
Nov 24 07:00:19.889956 kernel: acpiphp: Slot [28] registered
Nov 24 07:00:19.889965 kernel: acpiphp: Slot [29] registered
Nov 24 07:00:19.889973 kernel: acpiphp: Slot [30] registered
Nov 24 07:00:19.889982 kernel: acpiphp: Slot [31] registered
Nov 24 07:00:19.889991 kernel: PCI host bridge to bus 0000:00
Nov 24 07:00:19.890123 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 24 07:00:19.890231 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 24 07:00:19.890351 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 07:00:19.890479 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 24 07:00:19.890624 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 24 07:00:19.890749 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 07:00:19.890912 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 24 07:00:19.891055 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 24 07:00:19.891164 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 24 07:00:19.893042 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Nov 24 07:00:19.893167 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 24 07:00:19.893275 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 24 07:00:19.893381 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 24 07:00:19.893474 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 24 07:00:19.893590 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 24 07:00:19.893683 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Nov 24 07:00:19.893792 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 24 07:00:19.893940 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 24 07:00:19.894071 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 24 07:00:19.894201 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 24 07:00:19.894328 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 24 07:00:19.894423 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 24 07:00:19.894574 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Nov 24 07:00:19.894718 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Nov 24 07:00:19.894818 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 07:00:19.894930 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 07:00:19.895029 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Nov 24 07:00:19.895121 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Nov 24 07:00:19.895214 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 24 07:00:19.895343 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 07:00:19.895438 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Nov 24 07:00:19.895570 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Nov 24 07:00:19.895664 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 24 07:00:19.895773 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 24 07:00:19.895877 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Nov 24 07:00:19.896011 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Nov 24 07:00:19.896113 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 24 07:00:19.896214 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 07:00:19.896341 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Nov 24 07:00:19.896447 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Nov 24 07:00:19.896539 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 24 07:00:19.896655 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 07:00:19.896757 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Nov 24 07:00:19.896902 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Nov 24 07:00:19.897016 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 24 07:00:19.898426 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 07:00:19.898565 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Nov 24 07:00:19.898687 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 24 07:00:19.898705 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 07:00:19.898726 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 07:00:19.898739 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 07:00:19.898751 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 07:00:19.898763 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 24 07:00:19.898775 kernel: iommu: Default domain type: Translated
Nov 24 07:00:19.898788 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 07:00:19.898801 kernel: PCI: Using ACPI for IRQ routing
Nov 24 07:00:19.898814 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 07:00:19.898828 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 24 07:00:19.898858 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 24 07:00:19.900472 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 24 07:00:19.900622 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 24 07:00:19.900749 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 07:00:19.900766 kernel: vgaarb: loaded
Nov 24 07:00:19.900780 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 24 07:00:19.900793 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 24 07:00:19.900806 kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 07:00:19.900819 kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 07:00:19.900838 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 07:00:19.900851 kernel: pnp: PnP ACPI init
Nov 24 07:00:19.900864 kernel: pnp: PnP ACPI: found 4 devices
Nov 24 07:00:19.900877 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 07:00:19.900890 kernel: NET: Registered PF_INET protocol family
Nov 24 07:00:19.900903 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 24 07:00:19.900916 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 24 07:00:19.900929 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 07:00:19.900942 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 07:00:19.900958 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 24 07:00:19.900970 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 24 07:00:19.900983 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 24 07:00:19.900996 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 24 07:00:19.901009 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 07:00:19.901022 kernel: NET: Registered PF_XDP protocol family
Nov 24 07:00:19.901142 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 24 07:00:19.901267 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 24 07:00:19.901389 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 07:00:19.901499 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 24 07:00:19.901609 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 24 07:00:19.901739 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 24 07:00:19.901869 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 24 07:00:19.901887 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 24 07:00:19.902013 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 25354 usecs
Nov 24 07:00:19.902029 kernel: PCI: CLS 0 bytes, default 64
Nov 24 07:00:19.902046 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 24 07:00:19.902060 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Nov 24 07:00:19.902073 kernel: Initialise system trusted keyrings
Nov 24 07:00:19.902086 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 24 07:00:19.902099 kernel: Key type asymmetric registered
Nov 24 07:00:19.902112 kernel: Asymmetric key parser 'x509' registered
Nov 24 07:00:19.902124 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 24 07:00:19.902137 kernel: io scheduler mq-deadline registered
Nov 24 07:00:19.902151 kernel: io scheduler kyber registered
Nov 24 07:00:19.902167 kernel: io scheduler bfq registered
Nov 24 07:00:19.902180 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 24 07:00:19.902194 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 24 07:00:19.902206 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 24 07:00:19.902219 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 24 07:00:19.902233 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 07:00:19.908344 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 07:00:19.908364 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 07:00:19.908378 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 07:00:19.908399 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 07:00:19.908635 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 24 07:00:19.908656 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 24 07:00:19.908772 kernel: rtc_cmos 00:03: registered as rtc0
Nov 24 07:00:19.908885 kernel: rtc_cmos 00:03: setting system clock to 2025-11-24T07:00:19 UTC (1763967619)
Nov 24 07:00:19.908998 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 24 07:00:19.909014 kernel: intel_pstate: CPU model not supported
Nov 24 07:00:19.909027 kernel: NET: Registered PF_INET6 protocol family
Nov 24 07:00:19.909046 kernel: Segment Routing with IPv6
Nov 24 07:00:19.909059 kernel: In-situ OAM (IOAM) with IPv6
Nov 24 07:00:19.909072 kernel: NET: Registered PF_PACKET protocol family
Nov 24 07:00:19.909085 kernel: Key type dns_resolver registered
Nov 24 07:00:19.909099 kernel: IPI shorthand broadcast: enabled
Nov 24 07:00:19.909112 kernel: sched_clock: Marking stable (3545004432, 180535616)->(3871014546, -145474498)
Nov 24 07:00:19.909125 kernel: registered taskstats version 1
Nov 24 07:00:19.909138 kernel: Loading compiled-in X.509 certificates
Nov 24 07:00:19.909151 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 960cbe7f2b1ea74b5c881d6d42eea4d1ac19a607'
Nov 24 07:00:19.909167 kernel: Demotion targets for Node 0: null
Nov 24 07:00:19.909180 kernel: Key type .fscrypt registered
Nov 24 07:00:19.909193 kernel: Key type fscrypt-provisioning registered
Nov 24 07:00:19.909227 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 07:00:19.909259 kernel: ima: Allocated hash algorithm: sha1
Nov 24 07:00:19.909273 kernel: ima: No architecture policies found
Nov 24 07:00:19.909286 kernel: clk: Disabling unused clocks
Nov 24 07:00:19.909300 kernel: Warning: unable to open an initial console.
Nov 24 07:00:19.909314 kernel: Freeing unused kernel image (initmem) memory: 46200K
Nov 24 07:00:19.909332 kernel: Write protecting the kernel read-only data: 40960k
Nov 24 07:00:19.909345 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 24 07:00:19.909359 kernel: Run /init as init process
Nov 24 07:00:19.909372 kernel: with arguments:
Nov 24 07:00:19.909385 kernel: /init
Nov 24 07:00:19.909399 kernel: with environment:
Nov 24 07:00:19.909412 kernel: HOME=/
Nov 24 07:00:19.909425 kernel: TERM=linux
Nov 24 07:00:19.909439 systemd[1]: Successfully made /usr/ read-only.
Nov 24 07:00:19.909461 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 24 07:00:19.909475 systemd[1]: Detected virtualization kvm.
Nov 24 07:00:19.909488 systemd[1]: Detected architecture x86-64.
Nov 24 07:00:19.909501 systemd[1]: Running in initrd.
Nov 24 07:00:19.909515 systemd[1]: No hostname configured, using default hostname.
Nov 24 07:00:19.909529 systemd[1]: Hostname set to .
Nov 24 07:00:19.909543 systemd[1]: Initializing machine ID from VM UUID.
Nov 24 07:00:19.909560 systemd[1]: Queued start job for default target initrd.target.
Nov 24 07:00:19.909574 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 07:00:19.909589 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 07:00:19.909604 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 24 07:00:19.909618 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 24 07:00:19.909632 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 24 07:00:19.909650 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 24 07:00:19.909666 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 24 07:00:19.909680 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 24 07:00:19.909694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 07:00:19.909708 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 24 07:00:19.909722 systemd[1]: Reached target paths.target - Path Units.
Nov 24 07:00:19.909740 systemd[1]: Reached target slices.target - Slice Units.
Nov 24 07:00:19.909754 systemd[1]: Reached target swap.target - Swaps.
Nov 24 07:00:19.909768 systemd[1]: Reached target timers.target - Timer Units.
Nov 24 07:00:19.909782 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 24 07:00:19.909797 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 24 07:00:19.909811 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 24 07:00:19.909825 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 24 07:00:19.909840 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 07:00:19.909854 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 24 07:00:19.909871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 07:00:19.909885 systemd[1]: Reached target sockets.target - Socket Units.
Nov 24 07:00:19.909899 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 24 07:00:19.909912 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 24 07:00:19.909926 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 24 07:00:19.909941 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 24 07:00:19.909955 systemd[1]: Starting systemd-fsck-usr.service...
Nov 24 07:00:19.909969 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 24 07:00:19.909989 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 24 07:00:19.910003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 07:00:19.910017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 24 07:00:19.910072 systemd-journald[193]: Collecting audit messages is disabled.
Nov 24 07:00:19.910107 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 07:00:19.910122 systemd[1]: Finished systemd-fsck-usr.service.
Nov 24 07:00:19.910137 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 24 07:00:19.910153 systemd-journald[193]: Journal started
Nov 24 07:00:19.910185 systemd-journald[193]: Runtime Journal (/run/log/journal/3d204a19f1c740b6b91bc6454e4ceeb8) is 4.9M, max 39.2M, 34.3M free.
Nov 24 07:00:19.897443 systemd-modules-load[194]: Inserted module 'overlay'
Nov 24 07:00:19.918265 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 24 07:00:19.942276 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 07:00:19.943902 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 24 07:00:19.981468 kernel: Bridge firewalling registered
Nov 24 07:00:19.982006 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 24 07:00:19.982856 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 24 07:00:19.983884 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 07:00:19.988135 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 24 07:00:19.990382 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 24 07:00:19.993469 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 24 07:00:19.995867 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 24 07:00:20.024170 systemd-tmpfiles[214]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 24 07:00:20.028975 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 24 07:00:20.033630 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 07:00:20.034433 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 07:00:20.038832 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 24 07:00:20.040477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 24 07:00:20.043423 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 24 07:00:20.070955 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 07:00:20.087622 systemd-resolved[231]: Positive Trust Anchors:
Nov 24 07:00:20.088335 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 24 07:00:20.089040 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 24 07:00:20.094480 systemd-resolved[231]: Defaulting to hostname 'linux'.
Nov 24 07:00:20.096211 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 24 07:00:20.097425 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 24 07:00:20.181320 kernel: SCSI subsystem initialized
Nov 24 07:00:20.193273 kernel: Loading iSCSI transport class v2.0-870.
Nov 24 07:00:20.206288 kernel: iscsi: registered transport (tcp)
Nov 24 07:00:20.233406 kernel: iscsi: registered transport (qla4xxx)
Nov 24 07:00:20.233494 kernel: QLogic iSCSI HBA Driver
Nov 24 07:00:20.257116 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 24 07:00:20.275995 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 07:00:20.277067 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 24 07:00:20.341818 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 24 07:00:20.345282 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 24 07:00:20.408338 kernel: raid6: avx2x4 gen() 14409 MB/s
Nov 24 07:00:20.426304 kernel: raid6: avx2x2 gen() 14947 MB/s
Nov 24 07:00:20.444638 kernel: raid6: avx2x1 gen() 11359 MB/s
Nov 24 07:00:20.444721 kernel: raid6: using algorithm avx2x2 gen() 14947 MB/s
Nov 24 07:00:20.463800 kernel: raid6: .... xor() 11464 MB/s, rmw enabled
Nov 24 07:00:20.463899 kernel: raid6: using avx2x2 recovery algorithm
Nov 24 07:00:20.494292 kernel: xor: automatically using best checksumming function avx
Nov 24 07:00:20.758316 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 24 07:00:20.768540 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 24 07:00:20.772349 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 24 07:00:20.808153 systemd-udevd[441]: Using default interface naming scheme 'v255'.
Nov 24 07:00:20.818939 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 24 07:00:20.822992 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 24 07:00:20.852514 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation
Nov 24 07:00:20.891392 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 24 07:00:20.893620 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 24 07:00:20.983946 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 07:00:20.989388 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 24 07:00:21.053273 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 24 07:00:21.058828 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 24 07:00:21.074820 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 24 07:00:21.074901 kernel: GPT:9289727 != 125829119
Nov 24 07:00:21.074914 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 24 07:00:21.076745 kernel: GPT:9289727 != 125829119
Nov 24 07:00:21.077503 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 24 07:00:21.079496 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 24 07:00:21.084264 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Nov 24 07:00:21.098427 kernel: scsi host0: Virtio SCSI HBA
Nov 24 07:00:21.126268 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 24 07:00:21.136272 kernel: cryptd: max_cpu_qlen set to 1000
Nov 24 07:00:21.136355 kernel: libata version 3.00 loaded.
Nov 24 07:00:21.141309 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Nov 24 07:00:21.160358 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 24 07:00:21.173334 kernel: AES CTR mode by8 optimization enabled
Nov 24 07:00:21.176263 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 24 07:00:21.182259 kernel: scsi host1: ata_piix
Nov 24 07:00:21.187051 kernel: scsi host2: ata_piix
Nov 24 07:00:21.187301 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Nov 24 07:00:21.187316 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Nov 24 07:00:21.192378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 07:00:21.193279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 07:00:21.195343 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 07:00:21.199509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 07:00:21.202305 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 24 07:00:21.225279 kernel: ACPI: bus type USB registered
Nov 24 07:00:21.227264 kernel: usbcore: registered new interface driver usbfs
Nov 24 07:00:21.232266 kernel: usbcore: registered new interface driver hub
Nov 24 07:00:21.235266 kernel: usbcore: registered new device driver usb
Nov 24 07:00:21.286519 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 24 07:00:21.339927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 07:00:21.359440 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 24 07:00:21.378725 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 24 07:00:21.386184 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 24 07:00:21.386424 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 24 07:00:21.386656 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 24 07:00:21.386804 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 24 07:00:21.386923 kernel: hub 1-0:1.0: USB hub found
Nov 24 07:00:21.387095 kernel: hub 1-0:1.0: 2 ports detected
Nov 24 07:00:21.398132 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 24 07:00:21.399597 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 24 07:00:21.401274 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 24 07:00:21.403090 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 24 07:00:21.404032 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 07:00:21.405134 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 24 07:00:21.407331 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 24 07:00:21.410452 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 24 07:00:21.438434 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 24 07:00:21.443272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 24 07:00:21.443757 disk-uuid[595]: Primary Header is updated.
Nov 24 07:00:21.443757 disk-uuid[595]: Secondary Entries is updated.
Nov 24 07:00:21.443757 disk-uuid[595]: Secondary Header is updated.
Nov 24 07:00:22.464305 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 24 07:00:22.465530 disk-uuid[603]: The operation has completed successfully.
Nov 24 07:00:22.539719 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 24 07:00:22.539887 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 24 07:00:22.559918 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 24 07:00:22.579124 sh[614]: Success
Nov 24 07:00:22.600283 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 24 07:00:22.602677 kernel: device-mapper: uevent: version 1.0.3
Nov 24 07:00:22.602755 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 24 07:00:22.615281 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Nov 24 07:00:22.678344 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 24 07:00:22.683396 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 24 07:00:22.705006 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 24 07:00:22.717844 kernel: BTRFS: device fsid 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (626)
Nov 24 07:00:22.717944 kernel: BTRFS info (device dm-0): first mount of filesystem 3af95a3e-5df6-49e0-91e3-ddf2109f68c7
Nov 24 07:00:22.719623 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 24 07:00:22.729428 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 24 07:00:22.729555 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 24 07:00:22.732259 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 24 07:00:22.733551 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 24 07:00:22.734220 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 24 07:00:22.735432 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 24 07:00:22.739432 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 24 07:00:22.779345 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (659)
Nov 24 07:00:22.783940 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 07:00:22.784046 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 24 07:00:22.791499 kernel: BTRFS info (device vda6): turning on async discard
Nov 24 07:00:22.791609 kernel: BTRFS info (device vda6): enabling free space tree
Nov 24 07:00:22.799368 kernel: BTRFS info (device vda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 07:00:22.800366 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 24 07:00:22.805577 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 24 07:00:22.925349 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 24 07:00:22.932544 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 24 07:00:22.980345 systemd-networkd[795]: lo: Link UP
Nov 24 07:00:22.981078 systemd-networkd[795]: lo: Gained carrier
Nov 24 07:00:22.984509 systemd-networkd[795]: Enumeration completed
Nov 24 07:00:22.985442 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 24 07:00:22.986459 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 24 07:00:22.986464 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 24 07:00:22.987542 systemd[1]: Reached target network.target - Network.
Nov 24 07:00:22.990516 systemd-networkd[795]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 07:00:22.990521 systemd-networkd[795]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 24 07:00:22.993028 systemd-networkd[795]: eth0: Link UP
Nov 24 07:00:22.993293 systemd-networkd[795]: eth1: Link UP
Nov 24 07:00:22.993481 systemd-networkd[795]: eth0: Gained carrier
Nov 24 07:00:22.993498 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 24 07:00:22.998121 systemd-networkd[795]: eth1: Gained carrier
Nov 24 07:00:22.998141 systemd-networkd[795]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 07:00:23.012537 systemd-networkd[795]: eth0: DHCPv4 address 24.144.92.64/20, gateway 24.144.80.1 acquired from 169.254.169.253
Nov 24 07:00:23.032476 systemd-networkd[795]: eth1: DHCPv4 address 10.124.0.16/20 acquired from 169.254.169.253
Nov 24 07:00:23.040877 ignition[708]: Ignition 2.22.0
Nov 24 07:00:23.040907 ignition[708]: Stage: fetch-offline
Nov 24 07:00:23.041081 ignition[708]: no configs at "/usr/lib/ignition/base.d"
Nov 24 07:00:23.041098 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 24 07:00:23.041307 ignition[708]: parsed url from cmdline: ""
Nov 24 07:00:23.041314 ignition[708]: no config URL provided
Nov 24 07:00:23.041339 ignition[708]: reading system config file "/usr/lib/ignition/user.ign"
Nov 24 07:00:23.044558 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 24 07:00:23.041384 ignition[708]: no config at "/usr/lib/ignition/user.ign"
Nov 24 07:00:23.041394 ignition[708]: failed to fetch config: resource requires networking
Nov 24 07:00:23.041688 ignition[708]: Ignition finished successfully
Nov 24 07:00:23.048848 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 24 07:00:23.096451 ignition[804]: Ignition 2.22.0
Nov 24 07:00:23.097312 ignition[804]: Stage: fetch
Nov 24 07:00:23.097980 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Nov 24 07:00:23.098453 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 24 07:00:23.098607 ignition[804]: parsed url from cmdline: ""
Nov 24 07:00:23.098611 ignition[804]: no config URL provided
Nov 24 07:00:23.098620 ignition[804]: reading system config file "/usr/lib/ignition/user.ign"
Nov 24 07:00:23.098629 ignition[804]: no config at "/usr/lib/ignition/user.ign"
Nov 24 07:00:23.098662 ignition[804]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 24 07:00:23.113157 ignition[804]: GET result: OK
Nov 24 07:00:23.113505 ignition[804]: parsing config with SHA512: 727fa1cfbcf758daa2c781d677a5fa222c677caffc4b74097cb0dface5e12b6e52c0b8ba4e4d4c87bac9cf92df256b59383147046833f6312d734af426e17c26
Nov 24 07:00:23.119102 unknown[804]: fetched base config from "system"
Nov 24 07:00:23.119114 unknown[804]: fetched base config from "system"
Nov 24 07:00:23.119797 ignition[804]: fetch: fetch complete
Nov 24 07:00:23.119121 unknown[804]: fetched user config from "digitalocean"
Nov 24 07:00:23.119809 ignition[804]: fetch: fetch passed
Nov 24 07:00:23.119872 ignition[804]: Ignition finished successfully
Nov 24 07:00:23.122598 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 24 07:00:23.124306 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 24 07:00:23.174663 ignition[810]: Ignition 2.22.0
Nov 24 07:00:23.174683 ignition[810]: Stage: kargs
Nov 24 07:00:23.174865 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Nov 24 07:00:23.174876 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 24 07:00:23.177293 ignition[810]: kargs: kargs passed
Nov 24 07:00:23.177350 ignition[810]: Ignition finished successfully
Nov 24 07:00:23.179775 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 24 07:00:23.182028 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 24 07:00:23.222189 ignition[816]: Ignition 2.22.0
Nov 24 07:00:23.222204 ignition[816]: Stage: disks
Nov 24 07:00:23.222413 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Nov 24 07:00:23.222429 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 24 07:00:23.225167 ignition[816]: disks: disks passed
Nov 24 07:00:23.225232 ignition[816]: Ignition finished successfully
Nov 24 07:00:23.228717 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 24 07:00:23.230843 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 24 07:00:23.231524 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 24 07:00:23.232576 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 24 07:00:23.233508 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 24 07:00:23.234642 systemd[1]: Reached target basic.target - Basic System.
Nov 24 07:00:23.237113 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 24 07:00:23.268979 systemd-fsck[825]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 24 07:00:23.272413 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 24 07:00:23.276387 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 24 07:00:23.405282 kernel: EXT4-fs (vda9): mounted filesystem f89e2a65-2a4a-426b-9659-02844cc29a2a r/w with ordered data mode. Quota mode: none.
Nov 24 07:00:23.405735 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 24 07:00:23.407983 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 24 07:00:23.411362 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 24 07:00:23.413824 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 24 07:00:23.422054 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Nov 24 07:00:23.429177 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 24 07:00:23.446225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (833)
Nov 24 07:00:23.446451 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 07:00:23.446466 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 24 07:00:23.446480 kernel: BTRFS info (device vda6): turning on async discard
Nov 24 07:00:23.446504 kernel: BTRFS info (device vda6): enabling free space tree
Nov 24 07:00:23.429926 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 24 07:00:23.430042 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 24 07:00:23.455482 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 24 07:00:23.456533 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 24 07:00:23.464422 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 24 07:00:23.525249 coreos-metadata[835]: Nov 24 07:00:23.524 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 24 07:00:23.533756 coreos-metadata[836]: Nov 24 07:00:23.533 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 24 07:00:23.538057 coreos-metadata[835]: Nov 24 07:00:23.537 INFO Fetch successful
Nov 24 07:00:23.543480 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Nov 24 07:00:23.548994 coreos-metadata[836]: Nov 24 07:00:23.548 INFO Fetch successful
Nov 24 07:00:23.550757 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Nov 24 07:00:23.551371 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Nov 24 07:00:23.556493 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Nov 24 07:00:23.560396 coreos-metadata[836]: Nov 24 07:00:23.560 INFO wrote hostname ci-4459.2.1-b-419a632674 to /sysroot/etc/hostname
Nov 24 07:00:23.562156 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Nov 24 07:00:23.563300 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 24 07:00:23.568303 initrd-setup-root[887]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 24 07:00:23.692203 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 24 07:00:23.694924 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 24 07:00:23.698440 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 24 07:00:23.723293 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 24 07:00:23.726539 kernel: BTRFS info (device vda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 07:00:23.741473 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 24 07:00:23.772585 ignition[956]: INFO : Ignition 2.22.0
Nov 24 07:00:23.772585 ignition[956]: INFO : Stage: mount
Nov 24 07:00:23.773849 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 24 07:00:23.773849 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 24 07:00:23.775049 ignition[956]: INFO : mount: mount passed
Nov 24 07:00:23.775049 ignition[956]: INFO : Ignition finished successfully
Nov 24 07:00:23.777175 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 24 07:00:23.779530 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 24 07:00:23.805383 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 24 07:00:23.839275 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966)
Nov 24 07:00:23.842662 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 07:00:23.842736 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 24 07:00:23.849255 kernel: BTRFS info (device vda6): turning on async discard
Nov 24 07:00:23.849358 kernel: BTRFS info (device vda6): enabling free space tree
Nov 24 07:00:23.851968 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 24 07:00:23.897420 ignition[983]: INFO : Ignition 2.22.0
Nov 24 07:00:23.897420 ignition[983]: INFO : Stage: files
Nov 24 07:00:23.898950 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 24 07:00:23.898950 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 24 07:00:23.898950 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Nov 24 07:00:23.901353 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 24 07:00:23.901353 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 24 07:00:23.903057 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 24 07:00:23.903057 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 24 07:00:23.903057 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 24 07:00:23.902715 unknown[983]: wrote ssh authorized keys file for user: core
Nov 24 07:00:23.906451 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 24 07:00:23.906451 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 24 07:00:23.947079 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 24 07:00:24.019089 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 24 07:00:24.020612 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 24 07:00:24.020612 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 24 07:00:24.020612 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 24 07:00:24.020612 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 24 07:00:24.020612 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 24 07:00:24.020612 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 24 07:00:24.020612 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 24 07:00:24.020612 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 24 07:00:24.032320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 24 07:00:24.032320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 24 07:00:24.032320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 24 07:00:24.032320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 24 07:00:24.032320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 24 07:00:24.032320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 24 07:00:24.299358 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 24 07:00:24.581617 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 24 07:00:24.582768 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 24 07:00:24.583566 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 24 07:00:24.584476 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 24 07:00:24.584476 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 24 07:00:24.584476 ignition[983]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 24 07:00:24.587831 ignition[983]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 24 07:00:24.587831 ignition[983]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 24 07:00:24.587831 ignition[983]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 24 07:00:24.587831 ignition[983]: INFO : files: files passed
Nov 24 07:00:24.587831 ignition[983]: INFO : Ignition finished successfully
Nov 24 07:00:24.587370 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 24 07:00:24.589379 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 24 07:00:24.596396 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 24 07:00:24.607092 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 07:00:24.607213 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 07:00:24.615260 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 07:00:24.615260 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 07:00:24.618466 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 07:00:24.621486 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 07:00:24.622317 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 24 07:00:24.624086 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 24 07:00:24.677476 systemd-networkd[795]: eth0: Gained IPv6LL Nov 24 07:00:24.686311 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 24 07:00:24.686475 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 24 07:00:24.688364 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 24 07:00:24.689603 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 24 07:00:24.690229 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 24 07:00:24.692428 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 24 07:00:24.739060 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 07:00:24.742869 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 24 07:00:24.766729 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Nov 24 07:00:24.767932 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 07:00:24.769149 systemd[1]: Stopped target timers.target - Timer Units. Nov 24 07:00:24.770282 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 24 07:00:24.770489 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 07:00:24.772514 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 24 07:00:24.773506 systemd[1]: Stopped target basic.target - Basic System. Nov 24 07:00:24.774129 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 24 07:00:24.775139 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 07:00:24.776161 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 24 07:00:24.777199 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 24 07:00:24.778696 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 24 07:00:24.779981 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 07:00:24.781235 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 24 07:00:24.782341 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 24 07:00:24.783525 systemd[1]: Stopped target swap.target - Swaps. Nov 24 07:00:24.784317 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 24 07:00:24.784456 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 24 07:00:24.785550 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 24 07:00:24.786092 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 07:00:24.787110 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 24 07:00:24.787229 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 24 07:00:24.788086 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 24 07:00:24.788263 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 24 07:00:24.789354 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 24 07:00:24.789471 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 07:00:24.790750 systemd[1]: ignition-files.service: Deactivated successfully. Nov 24 07:00:24.790928 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 24 07:00:24.791561 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 24 07:00:24.791744 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 24 07:00:24.794380 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 24 07:00:24.795458 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 24 07:00:24.796378 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 07:00:24.800481 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 24 07:00:24.800981 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 24 07:00:24.801166 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 07:00:24.804306 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 24 07:00:24.804920 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 07:00:24.810981 systemd-networkd[795]: eth1: Gained IPv6LL Nov 24 07:00:24.825665 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 24 07:00:24.825773 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 24 07:00:24.841928 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 24 07:00:24.857941 systemd[1]: ignition-mount.service: Deactivated successfully. 
Nov 24 07:00:24.858828 ignition[1037]: INFO : Ignition 2.22.0 Nov 24 07:00:24.858828 ignition[1037]: INFO : Stage: umount Nov 24 07:00:24.858828 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 07:00:24.858828 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 24 07:00:24.858828 ignition[1037]: INFO : umount: umount passed Nov 24 07:00:24.858828 ignition[1037]: INFO : Ignition finished successfully Nov 24 07:00:24.858057 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 24 07:00:24.860577 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 24 07:00:24.860758 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 24 07:00:24.868525 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 24 07:00:24.868626 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 24 07:00:24.869217 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 24 07:00:24.869319 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 24 07:00:24.870482 systemd[1]: Stopped target network.target - Network. Nov 24 07:00:24.876817 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 24 07:00:24.876915 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 07:00:24.877881 systemd[1]: Stopped target paths.target - Path Units. Nov 24 07:00:24.878910 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 24 07:00:24.884349 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 07:00:24.884891 systemd[1]: Stopped target slices.target - Slice Units. Nov 24 07:00:24.885763 systemd[1]: Stopped target sockets.target - Socket Units. Nov 24 07:00:24.886581 systemd[1]: iscsid.socket: Deactivated successfully. Nov 24 07:00:24.886637 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Nov 24 07:00:24.887385 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 24 07:00:24.887420 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 07:00:24.888296 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 24 07:00:24.888365 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 24 07:00:24.889217 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 24 07:00:24.889289 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 24 07:00:24.890372 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 24 07:00:24.891134 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 24 07:00:24.892667 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 24 07:00:24.892767 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 24 07:00:24.894746 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 24 07:00:24.894833 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 24 07:00:24.900217 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 24 07:00:24.900418 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 24 07:00:24.905107 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 24 07:00:24.905807 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 24 07:00:24.905864 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 07:00:24.907872 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 24 07:00:24.908127 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 24 07:00:24.908336 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 24 07:00:24.910004 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Nov 24 07:00:24.910646 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 24 07:00:24.911691 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 24 07:00:24.911733 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 24 07:00:24.913709 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 24 07:00:24.915529 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 24 07:00:24.915596 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 07:00:24.918661 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 07:00:24.918722 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 24 07:00:24.921353 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 24 07:00:24.921406 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 24 07:00:24.923078 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 07:00:24.927899 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 24 07:00:24.942407 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 24 07:00:24.943284 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 07:00:24.944846 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 24 07:00:24.945467 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 24 07:00:24.947529 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 24 07:00:24.948144 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 24 07:00:24.949315 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 24 07:00:24.949876 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 07:00:24.950926 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Nov 24 07:00:24.950998 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 24 07:00:24.951722 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 24 07:00:24.951769 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 24 07:00:24.952310 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 24 07:00:24.952374 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 07:00:24.954653 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 24 07:00:24.955947 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 24 07:00:24.956010 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 07:00:24.958429 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 24 07:00:24.958484 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 07:00:24.959542 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 07:00:24.959589 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 07:00:24.972638 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 24 07:00:24.972793 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 24 07:00:24.974102 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 24 07:00:24.975650 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 24 07:00:25.012683 systemd[1]: Switching root. Nov 24 07:00:25.065969 systemd-journald[193]: Journal stopped Nov 24 07:00:26.590445 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Nov 24 07:00:26.590569 kernel: SELinux: policy capability network_peer_controls=1 Nov 24 07:00:26.590597 kernel: SELinux: policy capability open_perms=1 Nov 24 07:00:26.590617 kernel: SELinux: policy capability extended_socket_class=1 Nov 24 07:00:26.590633 kernel: SELinux: policy capability always_check_network=0 Nov 24 07:00:26.590660 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 24 07:00:26.590676 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 24 07:00:26.590692 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 24 07:00:26.590709 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 24 07:00:26.590726 kernel: SELinux: policy capability userspace_initial_context=0 Nov 24 07:00:26.590743 kernel: audit: type=1403 audit(1763967625.220:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 24 07:00:26.590763 systemd[1]: Successfully loaded SELinux policy in 77.083ms. Nov 24 07:00:26.590801 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.053ms. Nov 24 07:00:26.590822 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 07:00:26.590848 systemd[1]: Detected virtualization kvm. Nov 24 07:00:26.590888 systemd[1]: Detected architecture x86-64. Nov 24 07:00:26.590912 systemd[1]: Detected first boot. Nov 24 07:00:26.590931 systemd[1]: Hostname set to . Nov 24 07:00:26.590948 systemd[1]: Initializing machine ID from VM UUID. Nov 24 07:00:26.590962 zram_generator::config[1084]: No configuration found. 
Nov 24 07:00:26.590982 kernel: Guest personality initialized and is inactive Nov 24 07:00:26.590994 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Nov 24 07:00:26.591010 kernel: Initialized host personality Nov 24 07:00:26.591021 kernel: NET: Registered PF_VSOCK protocol family Nov 24 07:00:26.591033 systemd[1]: Populated /etc with preset unit settings. Nov 24 07:00:26.591048 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 24 07:00:26.591061 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 24 07:00:26.591078 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 24 07:00:26.591090 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 24 07:00:26.591103 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 24 07:00:26.591118 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 24 07:00:26.591131 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 24 07:00:26.591142 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 24 07:00:26.591155 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 24 07:00:26.591168 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 24 07:00:26.591180 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 24 07:00:26.591201 systemd[1]: Created slice user.slice - User and Session Slice. Nov 24 07:00:26.591214 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 07:00:26.591227 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 07:00:26.597410 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Nov 24 07:00:26.597517 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 24 07:00:26.597533 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 24 07:00:26.597547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 07:00:26.597560 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 24 07:00:26.597572 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 07:00:26.597594 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 07:00:26.597607 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 24 07:00:26.597628 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 24 07:00:26.597641 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 24 07:00:26.597654 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 24 07:00:26.597666 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 07:00:26.597679 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 07:00:26.597691 systemd[1]: Reached target slices.target - Slice Units. Nov 24 07:00:26.597703 systemd[1]: Reached target swap.target - Swaps. Nov 24 07:00:26.597715 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 24 07:00:26.597731 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 24 07:00:26.597742 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 24 07:00:26.597754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 07:00:26.597766 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 07:00:26.597778 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 24 07:00:26.597790 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 24 07:00:26.597808 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 24 07:00:26.597821 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 24 07:00:26.597833 systemd[1]: Mounting media.mount - External Media Directory... Nov 24 07:00:26.597847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 07:00:26.597859 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 24 07:00:26.597872 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 24 07:00:26.597884 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 24 07:00:26.597896 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 24 07:00:26.597909 systemd[1]: Reached target machines.target - Containers. Nov 24 07:00:26.597922 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 24 07:00:26.597934 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 07:00:26.597950 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 07:00:26.597961 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 24 07:00:26.597974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 07:00:26.597987 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 07:00:26.597999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 07:00:26.598011 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Nov 24 07:00:26.598023 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 07:00:26.598036 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 24 07:00:26.598051 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 24 07:00:26.598062 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 24 07:00:26.598080 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 24 07:00:26.598092 systemd[1]: Stopped systemd-fsck-usr.service. Nov 24 07:00:26.598105 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 07:00:26.598117 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 07:00:26.598130 kernel: loop: module loaded Nov 24 07:00:26.598144 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 07:00:26.598156 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 07:00:26.598171 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 24 07:00:26.598184 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 24 07:00:26.598199 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 07:00:26.598214 systemd[1]: verity-setup.service: Deactivated successfully. Nov 24 07:00:26.598226 systemd[1]: Stopped verity-setup.service. Nov 24 07:00:26.598250 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 24 07:00:26.598264 kernel: ACPI: bus type drm_connector registered Nov 24 07:00:26.598276 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 24 07:00:26.598288 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 24 07:00:26.598300 systemd[1]: Mounted media.mount - External Media Directory. Nov 24 07:00:26.598315 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 24 07:00:26.598327 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 24 07:00:26.598339 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 24 07:00:26.598351 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 07:00:26.598369 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 07:00:26.598381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 07:00:26.598393 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 07:00:26.598405 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 07:00:26.598417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 07:00:26.598431 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 07:00:26.598443 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 07:00:26.598455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 07:00:26.598467 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 07:00:26.598479 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 24 07:00:26.598490 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 24 07:00:26.598516 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 24 07:00:26.598535 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 24 07:00:26.598553 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 07:00:26.598570 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 24 07:00:26.598582 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 24 07:00:26.598597 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 24 07:00:26.598610 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 07:00:26.598623 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 24 07:00:26.598635 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 24 07:00:26.598648 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 07:00:26.598726 systemd-journald[1159]: Collecting audit messages is disabled. Nov 24 07:00:26.598756 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 24 07:00:26.598769 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 07:00:26.598781 kernel: fuse: init (API version 7.41) Nov 24 07:00:26.598793 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 24 07:00:26.598806 systemd-journald[1159]: Journal started Nov 24 07:00:26.598832 systemd-journald[1159]: Runtime Journal (/run/log/journal/3d204a19f1c740b6b91bc6454e4ceeb8) is 4.9M, max 39.2M, 34.3M free. Nov 24 07:00:26.088637 systemd[1]: Queued start job for default target multi-user.target. Nov 24 07:00:26.116205 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 24 07:00:26.117080 systemd[1]: systemd-journald.service: Deactivated successfully. 
Nov 24 07:00:26.612206 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 24 07:00:26.615633 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 07:00:26.618179 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 24 07:00:26.619647 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 24 07:00:26.619900 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 24 07:00:26.622066 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 07:00:26.623874 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 24 07:00:26.625263 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 07:00:26.658596 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 24 07:00:26.679277 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 24 07:00:26.681154 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 07:00:26.690472 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 24 07:00:26.691755 kernel: loop0: detected capacity change from 0 to 224512 Nov 24 07:00:26.708486 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 24 07:00:26.718807 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 24 07:00:26.723960 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 24 07:00:26.728487 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 24 07:00:26.747515 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 24 07:00:26.751592 systemd-journald[1159]: Time spent on flushing to /var/log/journal/3d204a19f1c740b6b91bc6454e4ceeb8 is 18.008ms for 1013 entries. 
Nov 24 07:00:26.751592 systemd-journald[1159]: System Journal (/var/log/journal/3d204a19f1c740b6b91bc6454e4ceeb8) is 8M, max 195.6M, 187.6M free. Nov 24 07:00:26.774796 systemd-journald[1159]: Received client request to flush runtime journal. Nov 24 07:00:26.777463 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 24 07:00:26.788327 kernel: loop1: detected capacity change from 0 to 110984 Nov 24 07:00:26.799469 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 24 07:00:26.829108 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 07:00:26.838482 kernel: loop2: detected capacity change from 0 to 128560 Nov 24 07:00:26.879951 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 24 07:00:26.893926 kernel: loop3: detected capacity change from 0 to 8 Nov 24 07:00:26.883575 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 07:00:26.915284 kernel: loop4: detected capacity change from 0 to 224512 Nov 24 07:00:26.932377 kernel: loop5: detected capacity change from 0 to 110984 Nov 24 07:00:26.957476 kernel: loop6: detected capacity change from 0 to 128560 Nov 24 07:00:26.963460 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Nov 24 07:00:26.963485 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Nov 24 07:00:26.977134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 07:00:27.006796 kernel: loop7: detected capacity change from 0 to 8 Nov 24 07:00:27.011552 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Nov 24 07:00:27.012427 (sd-merge)[1228]: Merged extensions into '/usr'. Nov 24 07:00:27.033409 systemd[1]: Reload requested from client PID 1188 ('systemd-sysext') (unit systemd-sysext.service)... Nov 24 07:00:27.033438 systemd[1]: Reloading... 
Nov 24 07:00:27.211082 zram_generator::config[1258]: No configuration found. Nov 24 07:00:27.442288 ldconfig[1184]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 24 07:00:27.602969 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 07:00:27.603311 systemd[1]: Reloading finished in 569 ms. Nov 24 07:00:27.624644 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 24 07:00:27.625969 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 24 07:00:27.641167 systemd[1]: Starting ensure-sysext.service... Nov 24 07:00:27.647098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 07:00:27.680598 systemd[1]: Reload requested from client PID 1299 ('systemctl') (unit ensure-sysext.service)... Nov 24 07:00:27.680623 systemd[1]: Reloading... Nov 24 07:00:27.717021 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 24 07:00:27.717053 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 24 07:00:27.717358 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 24 07:00:27.717638 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 24 07:00:27.720263 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 24 07:00:27.720677 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Nov 24 07:00:27.720736 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Nov 24 07:00:27.727483 systemd-tmpfiles[1300]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 24 07:00:27.727498 systemd-tmpfiles[1300]: Skipping /boot Nov 24 07:00:27.752019 systemd-tmpfiles[1300]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 07:00:27.752039 systemd-tmpfiles[1300]: Skipping /boot Nov 24 07:00:27.819360 zram_generator::config[1323]: No configuration found. Nov 24 07:00:28.138669 systemd[1]: Reloading finished in 457 ms. Nov 24 07:00:28.163061 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 07:00:28.178989 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 07:00:28.191372 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 07:00:28.196584 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 24 07:00:28.206806 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 24 07:00:28.212613 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 07:00:28.215921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 07:00:28.220696 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 24 07:00:28.226692 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 07:00:28.227006 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 07:00:28.231877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 07:00:28.235944 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 07:00:28.249746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 07:00:28.250613 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 24 07:00:28.250823 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 07:00:28.250979 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 07:00:28.256176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 07:00:28.256534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 07:00:28.256829 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 07:00:28.256997 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 07:00:28.257154 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 07:00:28.267483 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 07:00:28.273790 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 07:00:28.274181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 07:00:28.284235 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 07:00:28.285050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 24 07:00:28.285274 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 07:00:28.285495 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 07:00:28.292946 systemd[1]: Finished ensure-sysext.service. Nov 24 07:00:28.302614 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 24 07:00:28.303886 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 24 07:00:28.337792 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 24 07:00:28.350639 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 24 07:00:28.353019 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 07:00:28.354452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 07:00:28.357425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 07:00:28.358364 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 07:00:28.363330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 07:00:28.368820 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 07:00:28.370319 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 07:00:28.381294 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 07:00:28.382228 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 07:00:28.383106 systemd-udevd[1377]: Using default interface naming scheme 'v255'. 
Nov 24 07:00:28.384804 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 07:00:28.388290 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 24 07:00:28.391175 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 24 07:00:28.404150 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 24 07:00:28.406358 augenrules[1412]: No rules Nov 24 07:00:28.406856 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 07:00:28.407252 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 07:00:28.424887 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 07:00:28.431479 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 07:00:28.453124 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 24 07:00:28.559997 systemd-networkd[1423]: lo: Link UP Nov 24 07:00:28.560010 systemd-networkd[1423]: lo: Gained carrier Nov 24 07:00:28.561049 systemd-networkd[1423]: Enumeration completed Nov 24 07:00:28.561185 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 07:00:28.567634 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 24 07:00:28.575187 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 24 07:00:28.595820 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 24 07:00:28.708747 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 24 07:00:28.709542 systemd[1]: Reached target time-set.target - System Time Set. 
Nov 24 07:00:28.714267 systemd-resolved[1376]: Positive Trust Anchors: Nov 24 07:00:28.714639 systemd-resolved[1376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 07:00:28.714724 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 07:00:28.722721 systemd-resolved[1376]: Using system hostname 'ci-4459.2.1-b-419a632674'. Nov 24 07:00:28.725673 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 07:00:28.726865 systemd[1]: Reached target network.target - Network. Nov 24 07:00:28.727376 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 07:00:28.727885 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 07:00:28.728855 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 24 07:00:28.729414 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 24 07:00:28.729904 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 24 07:00:28.730786 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 24 07:00:28.731503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 24 07:00:28.732325 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Nov 24 07:00:28.732897 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 24 07:00:28.732941 systemd[1]: Reached target paths.target - Path Units. Nov 24 07:00:28.733959 systemd[1]: Reached target timers.target - Timer Units. Nov 24 07:00:28.735541 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 24 07:00:28.737668 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 24 07:00:28.743107 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 24 07:00:28.744589 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 24 07:00:28.745149 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 24 07:00:28.753817 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 24 07:00:28.755604 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 24 07:00:28.757728 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 24 07:00:28.760888 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 24 07:00:28.760933 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 07:00:28.763523 systemd[1]: Reached target basic.target - Basic System. Nov 24 07:00:28.764174 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 24 07:00:28.764215 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 24 07:00:28.766390 systemd[1]: Starting containerd.service - containerd container runtime... Nov 24 07:00:28.770232 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 24 07:00:28.774596 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Nov 24 07:00:28.778027 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 24 07:00:28.787510 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 24 07:00:28.792612 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 24 07:00:28.795373 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 07:00:28.797217 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 24 07:00:28.807777 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 24 07:00:28.815121 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 24 07:00:28.820020 jq[1463]: false Nov 24 07:00:28.826287 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 24 07:00:28.833224 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 24 07:00:28.843627 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 24 07:00:28.846659 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 24 07:00:28.848066 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 24 07:00:28.851336 google_oslogin_nss_cache[1465]: oslogin_cache_refresh[1465]: Refreshing passwd entry cache Nov 24 07:00:28.851843 systemd[1]: Starting update-engine.service - Update Engine... Nov 24 07:00:28.853377 oslogin_cache_refresh[1465]: Refreshing passwd entry cache Nov 24 07:00:28.857771 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Nov 24 07:00:28.865268 google_oslogin_nss_cache[1465]: oslogin_cache_refresh[1465]: Failure getting users, quitting Nov 24 07:00:28.867393 oslogin_cache_refresh[1465]: Failure getting users, quitting Nov 24 07:00:28.869230 google_oslogin_nss_cache[1465]: oslogin_cache_refresh[1465]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 07:00:28.869230 google_oslogin_nss_cache[1465]: oslogin_cache_refresh[1465]: Refreshing group entry cache Nov 24 07:00:28.869230 google_oslogin_nss_cache[1465]: oslogin_cache_refresh[1465]: Failure getting groups, quitting Nov 24 07:00:28.869230 google_oslogin_nss_cache[1465]: oslogin_cache_refresh[1465]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 07:00:28.867454 oslogin_cache_refresh[1465]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 07:00:28.867541 oslogin_cache_refresh[1465]: Refreshing group entry cache Nov 24 07:00:28.868205 oslogin_cache_refresh[1465]: Failure getting groups, quitting Nov 24 07:00:28.868219 oslogin_cache_refresh[1465]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 07:00:28.875323 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 24 07:00:28.876374 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 24 07:00:28.876626 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 24 07:00:28.876969 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 24 07:00:28.877802 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 24 07:00:28.900207 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 24 07:00:28.900528 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 24 07:00:28.927957 jq[1474]: true Nov 24 07:00:28.928208 extend-filesystems[1464]: Found /dev/vda6 Nov 24 07:00:28.954717 extend-filesystems[1464]: Found /dev/vda9 Nov 24 07:00:28.956390 jq[1496]: true Nov 24 07:00:28.965503 systemd[1]: motdgen.service: Deactivated successfully. Nov 24 07:00:28.971086 extend-filesystems[1464]: Checking size of /dev/vda9 Nov 24 07:00:28.967343 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 24 07:00:28.979364 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Nov 24 07:00:28.981670 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 24 07:00:28.982159 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 07:00:28.993114 tar[1478]: linux-amd64/LICENSE Nov 24 07:00:28.993114 tar[1478]: linux-amd64/helm Nov 24 07:00:28.995328 update_engine[1473]: I20251124 07:00:28.991748 1473 main.cc:92] Flatcar Update Engine starting Nov 24 07:00:29.021270 kernel: ISO 9660 Extensions: RRIP_1991A Nov 24 07:00:29.023859 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 24 07:00:29.027474 coreos-metadata[1460]: Nov 24 07:00:29.025 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 24 07:00:29.028983 dbus-daemon[1461]: [system] SELinux support is enabled Nov 24 07:00:29.029217 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 24 07:00:29.033194 coreos-metadata[1460]: Nov 24 07:00:29.031 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) Nov 24 07:00:29.032665 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 24 07:00:29.033611 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 24 07:00:29.033655 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 24 07:00:29.035513 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 24 07:00:29.035601 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 24 07:00:29.035620 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 24 07:00:29.070684 systemd-logind[1472]: New seat seat0. Nov 24 07:00:29.071308 systemd[1]: Started systemd-logind.service - User Login Management. Nov 24 07:00:29.075928 systemd[1]: Started update-engine.service - Update Engine. Nov 24 07:00:29.079606 update_engine[1473]: I20251124 07:00:29.076009 1473 update_check_scheduler.cc:74] Next update check in 5m21s Nov 24 07:00:29.103150 extend-filesystems[1464]: Resized partition /dev/vda9 Nov 24 07:00:29.108138 systemd-networkd[1423]: eth1: Configuring with /run/systemd/network/10-ae:93:6c:75:59:35.network. 
Nov 24 07:00:29.120459 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 24 07:00:29.120554 extend-filesystems[1528]: resize2fs 1.47.3 (8-Jul-2025) Nov 24 07:00:29.123943 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 24 07:00:29.128043 systemd-networkd[1423]: eth1: Link UP Nov 24 07:00:29.140853 systemd-networkd[1423]: eth1: Gained carrier Nov 24 07:00:29.146473 systemd-networkd[1423]: eth0: Configuring with /run/systemd/network/10-8e:28:0d:9e:e5:d7.network. Nov 24 07:00:29.153431 systemd-networkd[1423]: eth0: Link UP Nov 24 07:00:29.159178 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 24 07:00:29.162456 systemd-networkd[1423]: eth0: Gained carrier Nov 24 07:00:29.162825 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. Nov 24 07:00:29.173724 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 24 07:00:29.213449 bash[1539]: Updated "/home/core/.ssh/authorized_keys" Nov 24 07:00:29.216177 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 24 07:00:29.219411 systemd[1]: Starting sshkeys.service... Nov 24 07:00:29.278129 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 24 07:00:29.301104 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 24 07:00:29.308431 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Nov 24 07:00:29.339833 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 24 07:00:29.365126 extend-filesystems[1528]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 24 07:00:29.365126 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 24 07:00:29.365126 extend-filesystems[1528]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 24 07:00:29.369201 extend-filesystems[1464]: Resized filesystem in /dev/vda9 Nov 24 07:00:29.369192 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 24 07:00:29.369673 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 24 07:00:29.432294 coreos-metadata[1546]: Nov 24 07:00:29.431 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 24 07:00:29.453439 coreos-metadata[1546]: Nov 24 07:00:29.450 INFO Fetch successful Nov 24 07:00:29.471548 unknown[1546]: wrote ssh authorized keys file for user: core Nov 24 07:00:29.496641 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 07:00:29.510645 update-ssh-keys[1559]: Updated "/home/core/.ssh/authorized_keys" Nov 24 07:00:29.513659 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 24 07:00:29.518643 systemd[1]: Finished sshkeys.service. 
Nov 24 07:00:29.539273 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 24 07:00:29.539647 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 24 07:00:29.553386 kernel: mousedev: PS/2 mouse device common for all mice Nov 24 07:00:29.553420 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 24 07:00:29.553434 kernel: ACPI: button: Power Button [PWRF] Nov 24 07:00:29.576507 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 07:00:29.619953 containerd[1499]: time="2025-11-24T07:00:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 24 07:00:29.620455 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 07:00:29.627372 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 07:00:29.628508 containerd[1499]: time="2025-11-24T07:00:29.628463575Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 24 07:00:29.651207 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 07:00:29.651904 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 07:00:29.660580 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Nov 24 07:00:29.674518 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 24 07:00:29.674613 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 24 07:00:29.678734 containerd[1499]: time="2025-11-24T07:00:29.678684071Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.644µs" Nov 24 07:00:29.678734 containerd[1499]: time="2025-11-24T07:00:29.678720947Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 24 07:00:29.678734 containerd[1499]: time="2025-11-24T07:00:29.678741161Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 24 07:00:29.678929 containerd[1499]: time="2025-11-24T07:00:29.678912238Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 24 07:00:29.678956 containerd[1499]: time="2025-11-24T07:00:29.678930715Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 24 07:00:29.678977 containerd[1499]: time="2025-11-24T07:00:29.678956861Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 07:00:29.682375 containerd[1499]: time="2025-11-24T07:00:29.679010774Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 07:00:29.682375 containerd[1499]: time="2025-11-24T07:00:29.679025572Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 07:00:29.682375 containerd[1499]: time="2025-11-24T07:00:29.679298083Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 24 07:00:29.682375 containerd[1499]: time="2025-11-24T07:00:29.679313994Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 07:00:29.682375 containerd[1499]: time="2025-11-24T07:00:29.679356253Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 07:00:29.682375 containerd[1499]: time="2025-11-24T07:00:29.679365263Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 24 07:00:29.682375 containerd[1499]: time="2025-11-24T07:00:29.679462305Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 24 07:00:29.682375 containerd[1499]: time="2025-11-24T07:00:29.679664610Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 07:00:29.682375 containerd[1499]: time="2025-11-24T07:00:29.679695764Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 07:00:29.682375 containerd[1499]: time="2025-11-24T07:00:29.679705082Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 24 07:00:29.683665 containerd[1499]: time="2025-11-24T07:00:29.683581724Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 24 07:00:29.685032 containerd[1499]: time="2025-11-24T07:00:29.684986929Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 24 07:00:29.685148 containerd[1499]: time="2025-11-24T07:00:29.685131004Z" level=info msg="metadata content store policy set" policy=shared
Nov 24 07:00:29.689728 kernel: Console: switching to colour dummy device 80x25 Nov 24 07:00:29.690845 containerd[1499]: time="2025-11-24T07:00:29.690796934Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 24 07:00:29.690993 containerd[1499]: time="2025-11-24T07:00:29.690961454Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 24 07:00:29.691023 containerd[1499]: time="2025-11-24T07:00:29.690991861Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 24 07:00:29.691023 containerd[1499]: time="2025-11-24T07:00:29.691006005Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 24 07:00:29.691023 containerd[1499]: time="2025-11-24T07:00:29.691019453Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 24 07:00:29.691081 containerd[1499]: time="2025-11-24T07:00:29.691029374Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 24 07:00:29.691081 containerd[1499]: time="2025-11-24T07:00:29.691040371Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 24 07:00:29.691081 containerd[1499]: time="2025-11-24T07:00:29.691051425Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 24 07:00:29.691081 containerd[1499]: time="2025-11-24T07:00:29.691062871Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 24 07:00:29.691081 containerd[1499]: time="2025-11-24T07:00:29.691072116Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 24 07:00:29.691177 containerd[1499]: time="2025-11-24T07:00:29.691082933Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 24 07:00:29.691177 containerd[1499]: time="2025-11-24T07:00:29.691094481Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 24 07:00:29.691277 containerd[1499]: time="2025-11-24T07:00:29.691230467Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 24 07:00:29.691305 containerd[1499]: time="2025-11-24T07:00:29.691290919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 24 07:00:29.691334 containerd[1499]: time="2025-11-24T07:00:29.691307642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 24 07:00:29.691334 containerd[1499]: time="2025-11-24T07:00:29.691318111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 24 07:00:29.691334 containerd[1499]: time="2025-11-24T07:00:29.691328182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 24 07:00:29.691391 containerd[1499]: time="2025-11-24T07:00:29.691337678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 24 07:00:29.691391 containerd[1499]: time="2025-11-24T07:00:29.691349216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 24 07:00:29.691391 containerd[1499]: time="2025-11-24T07:00:29.691360264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 24 07:00:29.691391 containerd[1499]: time="2025-11-24T07:00:29.691372356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 24 07:00:29.691391 containerd[1499]: time="2025-11-24T07:00:29.691382520Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 24 07:00:29.691493 containerd[1499]: time="2025-11-24T07:00:29.691392446Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 24 07:00:29.691493 containerd[1499]: time="2025-11-24T07:00:29.691439655Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 24 07:00:29.691493 containerd[1499]: time="2025-11-24T07:00:29.691452887Z" level=info msg="Start snapshots syncer" Nov 24 07:00:29.691493 containerd[1499]: time="2025-11-24T07:00:29.691472903Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 24 07:00:29.691804 containerd[1499]: time="2025-11-24T07:00:29.691760177Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 24 07:00:29.691929 containerd[1499]: time="2025-11-24T07:00:29.691821126Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 07:00:29.691929 containerd[1499]: time="2025-11-24T07:00:29.691884434Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 07:00:29.692005 containerd[1499]: time="2025-11-24T07:00:29.691989617Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 07:00:29.692054 containerd[1499]: time="2025-11-24T07:00:29.692040626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 07:00:29.692079 containerd[1499]: time="2025-11-24T07:00:29.692057383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 07:00:29.692079 containerd[1499]: time="2025-11-24T07:00:29.692067639Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 07:00:29.692119 containerd[1499]: time="2025-11-24T07:00:29.692079548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 07:00:29.692119 containerd[1499]: time="2025-11-24T07:00:29.692090159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 24 07:00:29.692119 containerd[1499]: time="2025-11-24T07:00:29.692100626Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 07:00:29.692176 containerd[1499]: time="2025-11-24T07:00:29.692124726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 07:00:29.692176 containerd[1499]: time="2025-11-24T07:00:29.692134990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 07:00:29.692176 containerd[1499]: time="2025-11-24T07:00:29.692144404Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 07:00:29.692300 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 24 07:00:29.692325 kernel: [drm] features: -context_init Nov 24 07:00:29.692339 containerd[1499]: time="2025-11-24T07:00:29.692174968Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 07:00:29.692339 containerd[1499]: time="2025-11-24T07:00:29.692190685Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 07:00:29.692339 containerd[1499]: time="2025-11-24T07:00:29.692201312Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 07:00:29.692339 containerd[1499]: time="2025-11-24T07:00:29.692209608Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 07:00:29.692339 containerd[1499]: time="2025-11-24T07:00:29.692217284Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 07:00:29.692339 containerd[1499]:
time="2025-11-24T07:00:29.692225056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 07:00:29.696428 containerd[1499]: time="2025-11-24T07:00:29.696327319Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 07:00:29.696514 containerd[1499]: time="2025-11-24T07:00:29.696442126Z" level=info msg="runtime interface created" Nov 24 07:00:29.696514 containerd[1499]: time="2025-11-24T07:00:29.696450385Z" level=info msg="created NRI interface" Nov 24 07:00:29.696514 containerd[1499]: time="2025-11-24T07:00:29.696461252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 07:00:29.696514 containerd[1499]: time="2025-11-24T07:00:29.696478972Z" level=info msg="Connect containerd service" Nov 24 07:00:29.696611 containerd[1499]: time="2025-11-24T07:00:29.696522007Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 07:00:29.697282 kernel: [drm] number of scanouts: 1 Nov 24 07:00:29.700559 containerd[1499]: time="2025-11-24T07:00:29.700305595Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 07:00:29.708270 kernel: [drm] number of cap sets: 0 Nov 24 07:00:29.724518 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 07:00:29.727727 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 07:00:29.731619 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 07:00:29.732495 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 24 07:00:29.760270 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Nov 24 07:00:29.835266 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 24 07:00:29.845039 kernel: Console: switching to colour frame buffer device 128x48 Nov 24 07:00:29.877463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 07:00:29.900949 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 24 07:00:30.033418 coreos-metadata[1460]: Nov 24 07:00:30.033 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Nov 24 07:00:30.036203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 07:00:30.046637 containerd[1499]: time="2025-11-24T07:00:30.046143486Z" level=info msg="Start subscribing containerd event" Nov 24 07:00:30.046637 containerd[1499]: time="2025-11-24T07:00:30.046211773Z" level=info msg="Start recovering state" Nov 24 07:00:30.046637 containerd[1499]: time="2025-11-24T07:00:30.046346755Z" level=info msg="Start event monitor" Nov 24 07:00:30.046637 containerd[1499]: time="2025-11-24T07:00:30.046360240Z" level=info msg="Start cni network conf syncer for default" Nov 24 07:00:30.046637 containerd[1499]: time="2025-11-24T07:00:30.046370269Z" level=info msg="Start streaming server" Nov 24 07:00:30.046637 containerd[1499]: time="2025-11-24T07:00:30.046388467Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 07:00:30.046637 containerd[1499]: time="2025-11-24T07:00:30.046397513Z" level=info msg="runtime interface starting up..." Nov 24 07:00:30.046637 containerd[1499]: time="2025-11-24T07:00:30.046404295Z" level=info msg="starting plugins..." Nov 24 07:00:30.046637 containerd[1499]: time="2025-11-24T07:00:30.046429443Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 07:00:30.048299 containerd[1499]: time="2025-11-24T07:00:30.048226653Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Nov 24 07:00:30.048486 containerd[1499]: time="2025-11-24T07:00:30.048406154Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 07:00:30.048593 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 07:00:30.059119 containerd[1499]: time="2025-11-24T07:00:30.059062189Z" level=info msg="containerd successfully booted in 0.439516s" Nov 24 07:00:30.070201 coreos-metadata[1460]: Nov 24 07:00:30.070 INFO Fetch successful Nov 24 07:00:30.106269 kernel: EDAC MC: Ver: 3.0.0 Nov 24 07:00:30.183301 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 24 07:00:30.185385 systemd-logind[1472]: Watching system buttons on /dev/input/event2 (Power Button) Nov 24 07:00:30.190544 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 24 07:00:30.216334 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 24 07:00:30.241544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 07:00:30.241784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 07:00:30.243073 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 07:00:30.247629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 07:00:30.251145 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 07:00:30.308079 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 07:00:30.362169 tar[1478]: linux-amd64/README.md Nov 24 07:00:30.379972 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 07:00:30.565471 systemd-networkd[1423]: eth1: Gained IPv6LL Nov 24 07:00:30.569710 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Nov 24 07:00:30.572230 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 07:00:30.574912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 07:00:30.580566 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 07:00:30.619270 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 07:00:31.142429 systemd-networkd[1423]: eth0: Gained IPv6LL Nov 24 07:00:31.643896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 07:00:31.645070 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 07:00:31.647455 systemd[1]: Startup finished in 3.616s (kernel) + 5.562s (initrd) + 6.501s (userspace) = 15.680s. Nov 24 07:00:31.655972 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 07:00:32.274073 kubelet[1650]: E1124 07:00:32.273979 1650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 07:00:32.277129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 07:00:32.277324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 07:00:32.277713 systemd[1]: kubelet.service: Consumed 1.236s CPU time, 263.1M memory peak. Nov 24 07:00:33.073803 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 07:00:33.075380 systemd[1]: Started sshd@0-24.144.92.64:22-139.178.68.195:35230.service - OpenSSH per-connection server daemon (139.178.68.195:35230). 
Nov 24 07:00:33.200813 sshd[1661]: Accepted publickey for core from 139.178.68.195 port 35230 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:33.202923 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:33.210980 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 07:00:33.213105 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 07:00:33.224206 systemd-logind[1472]: New session 1 of user core. Nov 24 07:00:33.244446 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 07:00:33.248566 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 24 07:00:33.265683 (systemd)[1666]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 07:00:33.269490 systemd-logind[1472]: New session c1 of user core. Nov 24 07:00:33.431015 systemd[1666]: Queued start job for default target default.target. Nov 24 07:00:33.439713 systemd[1666]: Created slice app.slice - User Application Slice. Nov 24 07:00:33.439768 systemd[1666]: Reached target paths.target - Paths. Nov 24 07:00:33.439830 systemd[1666]: Reached target timers.target - Timers. Nov 24 07:00:33.441838 systemd[1666]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 07:00:33.476150 systemd[1666]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 07:00:33.476570 systemd[1666]: Reached target sockets.target - Sockets. Nov 24 07:00:33.476728 systemd[1666]: Reached target basic.target - Basic System. Nov 24 07:00:33.476894 systemd[1666]: Reached target default.target - Main User Target. Nov 24 07:00:33.476921 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 07:00:33.477104 systemd[1666]: Startup finished in 198ms. Nov 24 07:00:33.487055 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 24 07:00:33.559600 systemd[1]: Started sshd@1-24.144.92.64:22-139.178.68.195:35232.service - OpenSSH per-connection server daemon (139.178.68.195:35232). Nov 24 07:00:33.627505 sshd[1677]: Accepted publickey for core from 139.178.68.195 port 35232 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:33.629454 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:33.636185 systemd-logind[1472]: New session 2 of user core. Nov 24 07:00:33.643580 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 24 07:00:33.709204 sshd[1680]: Connection closed by 139.178.68.195 port 35232 Nov 24 07:00:33.710178 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Nov 24 07:00:33.721520 systemd[1]: sshd@1-24.144.92.64:22-139.178.68.195:35232.service: Deactivated successfully. Nov 24 07:00:33.723664 systemd[1]: session-2.scope: Deactivated successfully. Nov 24 07:00:33.724921 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. Nov 24 07:00:33.729209 systemd[1]: Started sshd@2-24.144.92.64:22-139.178.68.195:35244.service - OpenSSH per-connection server daemon (139.178.68.195:35244). Nov 24 07:00:33.730950 systemd-logind[1472]: Removed session 2. Nov 24 07:00:33.797499 sshd[1686]: Accepted publickey for core from 139.178.68.195 port 35244 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:33.799762 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:33.805091 systemd-logind[1472]: New session 3 of user core. Nov 24 07:00:33.818649 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 24 07:00:33.879659 sshd[1689]: Connection closed by 139.178.68.195 port 35244 Nov 24 07:00:33.880225 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Nov 24 07:00:33.895743 systemd[1]: sshd@2-24.144.92.64:22-139.178.68.195:35244.service: Deactivated successfully. Nov 24 07:00:33.899831 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 07:00:33.901490 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. Nov 24 07:00:33.907042 systemd[1]: Started sshd@3-24.144.92.64:22-139.178.68.195:35252.service - OpenSSH per-connection server daemon (139.178.68.195:35252). Nov 24 07:00:33.908950 systemd-logind[1472]: Removed session 3. Nov 24 07:00:33.969462 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 35252 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:33.971813 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:33.977744 systemd-logind[1472]: New session 4 of user core. Nov 24 07:00:33.986618 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 24 07:00:34.049984 sshd[1698]: Connection closed by 139.178.68.195 port 35252 Nov 24 07:00:34.050545 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Nov 24 07:00:34.064691 systemd[1]: sshd@3-24.144.92.64:22-139.178.68.195:35252.service: Deactivated successfully. Nov 24 07:00:34.067083 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 07:00:34.068113 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit. Nov 24 07:00:34.072314 systemd[1]: Started sshd@4-24.144.92.64:22-139.178.68.195:35258.service - OpenSSH per-connection server daemon (139.178.68.195:35258). Nov 24 07:00:34.074363 systemd-logind[1472]: Removed session 4. 
Nov 24 07:00:34.142284 sshd[1704]: Accepted publickey for core from 139.178.68.195 port 35258 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:34.143825 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:34.151218 systemd-logind[1472]: New session 5 of user core. Nov 24 07:00:34.157608 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 07:00:34.231659 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 07:00:34.231938 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 07:00:34.250482 sudo[1708]: pam_unix(sudo:session): session closed for user root Nov 24 07:00:34.257277 sshd[1707]: Connection closed by 139.178.68.195 port 35258 Nov 24 07:00:34.257710 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Nov 24 07:00:34.275522 systemd[1]: sshd@4-24.144.92.64:22-139.178.68.195:35258.service: Deactivated successfully. Nov 24 07:00:34.278424 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 07:00:34.279497 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit. Nov 24 07:00:34.283705 systemd[1]: Started sshd@5-24.144.92.64:22-139.178.68.195:35262.service - OpenSSH per-connection server daemon (139.178.68.195:35262). Nov 24 07:00:34.285205 systemd-logind[1472]: Removed session 5. Nov 24 07:00:34.342410 sshd[1714]: Accepted publickey for core from 139.178.68.195 port 35262 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:34.344054 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:34.349580 systemd-logind[1472]: New session 6 of user core. Nov 24 07:00:34.359543 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 24 07:00:34.420372 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 07:00:34.420679 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 07:00:34.437149 sudo[1719]: pam_unix(sudo:session): session closed for user root Nov 24 07:00:34.445197 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 07:00:34.445886 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 07:00:34.457551 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 07:00:34.502852 augenrules[1741]: No rules Nov 24 07:00:34.503702 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 07:00:34.503922 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 07:00:34.505765 sudo[1718]: pam_unix(sudo:session): session closed for user root Nov 24 07:00:34.510278 sshd[1717]: Connection closed by 139.178.68.195 port 35262 Nov 24 07:00:34.510259 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Nov 24 07:00:34.525695 systemd[1]: sshd@5-24.144.92.64:22-139.178.68.195:35262.service: Deactivated successfully. Nov 24 07:00:34.527904 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 07:00:34.529350 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit. Nov 24 07:00:34.532631 systemd[1]: Started sshd@6-24.144.92.64:22-139.178.68.195:35264.service - OpenSSH per-connection server daemon (139.178.68.195:35264). Nov 24 07:00:34.533753 systemd-logind[1472]: Removed session 6. 
Nov 24 07:00:34.596831 sshd[1750]: Accepted publickey for core from 139.178.68.195 port 35264 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:34.598232 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:34.603399 systemd-logind[1472]: New session 7 of user core. Nov 24 07:00:34.611555 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 07:00:34.675667 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 07:00:34.676630 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 07:00:35.129621 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 07:00:35.150072 (dockerd)[1771]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 07:00:36.072172 systemd-resolved[1376]: Clock change detected. Flushing caches. Nov 24 07:00:36.073859 systemd-timesyncd[1397]: Contacted time server 216.240.36.24:123 (1.flatcar.pool.ntp.org). Nov 24 07:00:36.074082 systemd-timesyncd[1397]: Initial clock synchronization to Mon 2025-11-24 07:00:36.072066 UTC. Nov 24 07:00:36.078123 dockerd[1771]: time="2025-11-24T07:00:36.078025119Z" level=info msg="Starting up" Nov 24 07:00:36.078929 dockerd[1771]: time="2025-11-24T07:00:36.078839290Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 07:00:36.100092 dockerd[1771]: time="2025-11-24T07:00:36.100031641Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 07:00:36.118622 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport498250353-merged.mount: Deactivated successfully. Nov 24 07:00:36.183116 dockerd[1771]: time="2025-11-24T07:00:36.183045568Z" level=info msg="Loading containers: start." 
Nov 24 07:00:36.194056 kernel: Initializing XFRM netlink socket Nov 24 07:00:36.502775 systemd-networkd[1423]: docker0: Link UP Nov 24 07:00:36.506730 dockerd[1771]: time="2025-11-24T07:00:36.506611746Z" level=info msg="Loading containers: done." Nov 24 07:00:36.527961 dockerd[1771]: time="2025-11-24T07:00:36.527613287Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 07:00:36.527961 dockerd[1771]: time="2025-11-24T07:00:36.527761789Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 07:00:36.527961 dockerd[1771]: time="2025-11-24T07:00:36.527945678Z" level=info msg="Initializing buildkit" Nov 24 07:00:36.561322 dockerd[1771]: time="2025-11-24T07:00:36.561276960Z" level=info msg="Completed buildkit initialization" Nov 24 07:00:36.570929 dockerd[1771]: time="2025-11-24T07:00:36.570719758Z" level=info msg="Daemon has completed initialization" Nov 24 07:00:36.570929 dockerd[1771]: time="2025-11-24T07:00:36.570817895Z" level=info msg="API listen on /run/docker.sock" Nov 24 07:00:36.571406 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 07:00:37.542576 containerd[1499]: time="2025-11-24T07:00:37.542436811Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Nov 24 07:00:38.253616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780100807.mount: Deactivated successfully. 
Nov 24 07:00:39.656405 containerd[1499]: time="2025-11-24T07:00:39.655449619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:00:39.656405 containerd[1499]: time="2025-11-24T07:00:39.656358669Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183" Nov 24 07:00:39.656946 containerd[1499]: time="2025-11-24T07:00:39.656919549Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:00:39.659066 containerd[1499]: time="2025-11-24T07:00:39.659024935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:00:39.660561 containerd[1499]: time="2025-11-24T07:00:39.660509318Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 2.118028756s" Nov 24 07:00:39.660561 containerd[1499]: time="2025-11-24T07:00:39.660562310Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Nov 24 07:00:39.661455 containerd[1499]: time="2025-11-24T07:00:39.661311788Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Nov 24 07:00:41.078653 containerd[1499]: time="2025-11-24T07:00:41.078578480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:00:41.079963 containerd[1499]: time="2025-11-24T07:00:41.079919324Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010" Nov 24 07:00:41.080404 containerd[1499]: time="2025-11-24T07:00:41.080368646Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:00:41.083848 containerd[1499]: time="2025-11-24T07:00:41.083783918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:00:41.085964 containerd[1499]: time="2025-11-24T07:00:41.085884460Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.424536947s" Nov 24 07:00:41.085964 containerd[1499]: time="2025-11-24T07:00:41.085945231Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Nov 24 07:00:41.087065 containerd[1499]: time="2025-11-24T07:00:41.086940184Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Nov 24 07:00:42.363935 containerd[1499]: time="2025-11-24T07:00:42.362974578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:00:42.363935 containerd[1499]: 
time="2025-11-24T07:00:42.363849243Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248"
Nov 24 07:00:42.364955 containerd[1499]: time="2025-11-24T07:00:42.364469810Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:00:42.367614 containerd[1499]: time="2025-11-24T07:00:42.367571625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:00:42.368615 containerd[1499]: time="2025-11-24T07:00:42.368571983Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.281590059s"
Nov 24 07:00:42.368615 containerd[1499]: time="2025-11-24T07:00:42.368613229Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\""
Nov 24 07:00:42.369117 containerd[1499]: time="2025-11-24T07:00:42.369087298Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Nov 24 07:00:42.370713 systemd-resolved[1376]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Nov 24 07:00:43.059631 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 24 07:00:43.066100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 24 07:00:43.341005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 24 07:00:43.358684 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 24 07:00:43.443169 kubelet[2066]: E1124 07:00:43.443113    2066 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 24 07:00:43.454190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 07:00:43.455441 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 24 07:00:43.456270 systemd[1]: kubelet.service: Consumed 266ms CPU time, 110.6M memory peak.
Nov 24 07:00:43.870164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2006367978.mount: Deactivated successfully.
Nov 24 07:00:44.503716 containerd[1499]: time="2025-11-24T07:00:44.503656163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:00:44.504837 containerd[1499]: time="2025-11-24T07:00:44.504586046Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423"
Nov 24 07:00:44.506429 containerd[1499]: time="2025-11-24T07:00:44.506382548Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:00:44.508825 containerd[1499]: time="2025-11-24T07:00:44.508779406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:00:44.509418 containerd[1499]: time="2025-11-24T07:00:44.509384074Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 2.14025937s"
Nov 24 07:00:44.509528 containerd[1499]: time="2025-11-24T07:00:44.509513415Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\""
Nov 24 07:00:44.510259 containerd[1499]: time="2025-11-24T07:00:44.510229023Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 24 07:00:45.155524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3765104926.mount: Deactivated successfully.
Nov 24 07:00:45.433079 systemd-resolved[1376]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Nov 24 07:00:46.020419 containerd[1499]: time="2025-11-24T07:00:46.020350248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:00:46.021698 containerd[1499]: time="2025-11-24T07:00:46.021636798Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Nov 24 07:00:46.023030 containerd[1499]: time="2025-11-24T07:00:46.022988149Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:00:46.026771 containerd[1499]: time="2025-11-24T07:00:46.026726878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:00:46.028877 containerd[1499]: time="2025-11-24T07:00:46.028704322Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.518442799s"
Nov 24 07:00:46.028877 containerd[1499]: time="2025-11-24T07:00:46.028752914Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 24 07:00:46.029602 containerd[1499]: time="2025-11-24T07:00:46.029472471Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 24 07:00:46.581434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090198632.mount: Deactivated successfully.
Nov 24 07:00:46.587017 containerd[1499]: time="2025-11-24T07:00:46.586939879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 24 07:00:46.587711 containerd[1499]: time="2025-11-24T07:00:46.587671290Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 24 07:00:46.588937 containerd[1499]: time="2025-11-24T07:00:46.588246919Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 24 07:00:46.591371 containerd[1499]: time="2025-11-24T07:00:46.591298076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 24 07:00:46.593925 containerd[1499]: time="2025-11-24T07:00:46.593319213Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 563.539506ms"
Nov 24 07:00:46.593925 containerd[1499]: time="2025-11-24T07:00:46.593375860Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 24 07:00:46.595608 containerd[1499]: time="2025-11-24T07:00:46.595571574Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 24 07:00:47.188765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234913942.mount: Deactivated successfully.
Nov 24 07:00:48.903936 containerd[1499]: time="2025-11-24T07:00:48.903110097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:00:48.905917 containerd[1499]: time="2025-11-24T07:00:48.905854775Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Nov 24 07:00:48.906609 containerd[1499]: time="2025-11-24T07:00:48.906538896Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:00:48.910925 containerd[1499]: time="2025-11-24T07:00:48.910143146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:00:48.912049 containerd[1499]: time="2025-11-24T07:00:48.912016770Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.316281194s"
Nov 24 07:00:48.912215 containerd[1499]: time="2025-11-24T07:00:48.912196773Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 24 07:00:51.647663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 24 07:00:51.647960 systemd[1]: kubelet.service: Consumed 266ms CPU time, 110.6M memory peak.
Nov 24 07:00:51.651263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 24 07:00:51.691752 systemd[1]: Reload requested from client PID 2214 ('systemctl') (unit session-7.scope)...
Nov 24 07:00:51.691772 systemd[1]: Reloading...
Nov 24 07:00:51.859026 zram_generator::config[2259]: No configuration found.
Nov 24 07:00:52.240560 systemd[1]: Reloading finished in 548 ms.
Nov 24 07:00:52.303501 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 24 07:00:52.303595 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 24 07:00:52.304057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 24 07:00:52.304130 systemd[1]: kubelet.service: Consumed 129ms CPU time, 98.4M memory peak.
Nov 24 07:00:52.306423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 24 07:00:52.550840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 24 07:00:52.564427 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 24 07:00:52.643259 kubelet[2311]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 07:00:52.643259 kubelet[2311]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 24 07:00:52.643259 kubelet[2311]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 07:00:52.643736 kubelet[2311]: I1124 07:00:52.643397    2311 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 24 07:00:52.960202 kubelet[2311]: I1124 07:00:52.960040    2311 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 24 07:00:52.960202 kubelet[2311]: I1124 07:00:52.960099    2311 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 24 07:00:52.960970 kubelet[2311]: I1124 07:00:52.960930    2311 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 24 07:00:53.004104 kubelet[2311]: I1124 07:00:53.003284    2311 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 24 07:00:53.004308 kubelet[2311]: E1124 07:00:53.004077    2311 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://24.144.92.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.144.92.64:6443: connect: connection refused" logger="UnhandledError"
Nov 24 07:00:53.014266 kubelet[2311]: I1124 07:00:53.014215    2311 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 24 07:00:53.019591 kubelet[2311]: I1124 07:00:53.019548    2311 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 24 07:00:53.022251 kubelet[2311]: I1124 07:00:53.022175    2311 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 24 07:00:53.022669 kubelet[2311]: I1124 07:00:53.022429    2311 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-b-419a632674","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 24 07:00:53.025809 kubelet[2311]: I1124 07:00:53.025757    2311 topology_manager.go:138] "Creating topology manager with none policy"
Nov 24 07:00:53.026412 kubelet[2311]: I1124 07:00:53.026097    2311 container_manager_linux.go:304] "Creating device plugin manager"
Nov 24 07:00:53.030233 kubelet[2311]: I1124 07:00:53.030176    2311 state_mem.go:36] "Initialized new in-memory state store"
Nov 24 07:00:53.037524 kubelet[2311]: I1124 07:00:53.037264    2311 kubelet.go:446] "Attempting to sync node with API server"
Nov 24 07:00:53.037524 kubelet[2311]: I1124 07:00:53.037344    2311 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 24 07:00:53.037524 kubelet[2311]: I1124 07:00:53.037393    2311 kubelet.go:352] "Adding apiserver pod source"
Nov 24 07:00:53.037524 kubelet[2311]: I1124 07:00:53.037409    2311 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 24 07:00:53.042220 kubelet[2311]: W1124 07:00:53.040987    2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.144.92.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.1-b-419a632674&limit=500&resourceVersion=0": dial tcp 24.144.92.64:6443: connect: connection refused
Nov 24 07:00:53.042220 kubelet[2311]: E1124 07:00:53.041072    2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://24.144.92.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.1-b-419a632674&limit=500&resourceVersion=0\": dial tcp 24.144.92.64:6443: connect: connection refused" logger="UnhandledError"
Nov 24 07:00:53.042220 kubelet[2311]: W1124 07:00:53.041520    2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.144.92.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.144.92.64:6443: connect: connection refused
Nov 24 07:00:53.042220 kubelet[2311]: E1124 07:00:53.041562    2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.144.92.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.144.92.64:6443: connect: connection refused" logger="UnhandledError"
Nov 24 07:00:53.043298 kubelet[2311]: I1124 07:00:53.043271    2311 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Nov 24 07:00:53.046746 kubelet[2311]: I1124 07:00:53.046709    2311 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 24 07:00:53.049279 kubelet[2311]: W1124 07:00:53.049240    2311 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 24 07:00:53.054633 kubelet[2311]: I1124 07:00:53.054593    2311 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 24 07:00:53.054633 kubelet[2311]: I1124 07:00:53.054643    2311 server.go:1287] "Started kubelet"
Nov 24 07:00:53.054975 kubelet[2311]: I1124 07:00:53.054930    2311 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 24 07:00:53.056941 kubelet[2311]: I1124 07:00:53.056910    2311 server.go:479] "Adding debug handlers to kubelet server"
Nov 24 07:00:53.066586 kubelet[2311]: I1124 07:00:53.066476    2311 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 24 07:00:53.066891 kubelet[2311]: I1124 07:00:53.066868    2311 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 24 07:00:53.067307 kubelet[2311]: I1124 07:00:53.067288    2311 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 24 07:00:53.069404 kubelet[2311]: E1124 07:00:53.068202    2311 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.144.92.64:6443/api/v1/namespaces/default/events\": dial tcp 24.144.92.64:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.1-b-419a632674.187adf3c0b999cd3  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.1-b-419a632674,UID:ci-4459.2.1-b-419a632674,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.1-b-419a632674,},FirstTimestamp:2025-11-24 07:00:53.054618835 +0000 UTC m=+0.480989754,LastTimestamp:2025-11-24 07:00:53.054618835 +0000 UTC m=+0.480989754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.1-b-419a632674,}"
Nov 24 07:00:53.070835 kubelet[2311]: I1124 07:00:53.069868    2311 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 24 07:00:53.072934 kubelet[2311]: E1124 07:00:53.072523    2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.1-b-419a632674\" not found"
Nov 24 07:00:53.072934 kubelet[2311]: I1124 07:00:53.072592    2311 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 24 07:00:53.072934 kubelet[2311]: I1124 07:00:53.072862    2311 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 24 07:00:53.073078 kubelet[2311]: I1124 07:00:53.072971    2311 reconciler.go:26] "Reconciler: start to sync state"
Nov 24 07:00:53.074388 kubelet[2311]: W1124 07:00:53.073457    2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.144.92.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.144.92.64:6443: connect: connection refused
Nov 24 07:00:53.074388 kubelet[2311]: E1124 07:00:53.073522    2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.144.92.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.144.92.64:6443: connect: connection refused" logger="UnhandledError"
Nov 24 07:00:53.074388 kubelet[2311]: E1124 07:00:53.073805    2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.92.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-b-419a632674?timeout=10s\": dial tcp 24.144.92.64:6443: connect: connection refused" interval="200ms"
Nov 24 07:00:53.083238 kubelet[2311]: I1124 07:00:53.083189    2311 factory.go:221] Registration of the systemd container factory successfully
Nov 24 07:00:53.083393 kubelet[2311]: I1124 07:00:53.083321    2311 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 24 07:00:53.086386 kubelet[2311]: E1124 07:00:53.084823    2311 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 24 07:00:53.087381 kubelet[2311]: I1124 07:00:53.087356    2311 factory.go:221] Registration of the containerd container factory successfully
Nov 24 07:00:53.107928 kubelet[2311]: I1124 07:00:53.107703    2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 24 07:00:53.109556 kubelet[2311]: I1124 07:00:53.109511    2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 24 07:00:53.112068 kubelet[2311]: I1124 07:00:53.111998    2311 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 24 07:00:53.112271 kubelet[2311]: I1124 07:00:53.112255    2311 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 24 07:00:53.112957 kubelet[2311]: I1124 07:00:53.112330    2311 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 24 07:00:53.112957 kubelet[2311]: E1124 07:00:53.112440    2311 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 24 07:00:53.116307 kubelet[2311]: W1124 07:00:53.116235    2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.144.92.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.144.92.64:6443: connect: connection refused
Nov 24 07:00:53.116439 kubelet[2311]: E1124 07:00:53.116316    2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.144.92.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.144.92.64:6443: connect: connection refused" logger="UnhandledError"
Nov 24 07:00:53.123254 kubelet[2311]: I1124 07:00:53.123222    2311 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 24 07:00:53.123254 kubelet[2311]: I1124 07:00:53.123240    2311 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 24 07:00:53.123254 kubelet[2311]: I1124 07:00:53.123258    2311 state_mem.go:36] "Initialized new in-memory state store"
Nov 24 07:00:53.125379 kubelet[2311]: I1124 07:00:53.125335    2311 policy_none.go:49] "None policy: Start"
Nov 24 07:00:53.125379 kubelet[2311]: I1124 07:00:53.125361    2311 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 24 07:00:53.125379 kubelet[2311]: I1124 07:00:53.125392    2311 state_mem.go:35] "Initializing new in-memory state store"
Nov 24 07:00:53.133322 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 24 07:00:53.147392 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 24 07:00:53.151784 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 24 07:00:53.166399 kubelet[2311]: I1124 07:00:53.166307    2311 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 24 07:00:53.167156 kubelet[2311]: I1124 07:00:53.166525    2311 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 24 07:00:53.167156 kubelet[2311]: I1124 07:00:53.166540    2311 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 24 07:00:53.167156 kubelet[2311]: I1124 07:00:53.167133    2311 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 24 07:00:53.170929 kubelet[2311]: E1124 07:00:53.170681    2311 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 24 07:00:53.170929 kubelet[2311]: E1124 07:00:53.170823    2311 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.1-b-419a632674\" not found"
Nov 24 07:00:53.224879 systemd[1]: Created slice kubepods-burstable-pod26cf8d0a4efb18e8f260d452f7c70cda.slice - libcontainer container kubepods-burstable-pod26cf8d0a4efb18e8f260d452f7c70cda.slice.
Nov 24 07:00:53.239850 kubelet[2311]: E1124 07:00:53.239802    2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-b-419a632674\" not found" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.244227 systemd[1]: Created slice kubepods-burstable-pod618185baf9174d216692a62b59b07842.slice - libcontainer container kubepods-burstable-pod618185baf9174d216692a62b59b07842.slice.
Nov 24 07:00:53.247070 kubelet[2311]: E1124 07:00:53.247036    2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-b-419a632674\" not found" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.251232 systemd[1]: Created slice kubepods-burstable-pod05c2942f1da410ac8ac77e46dce6ca81.slice - libcontainer container kubepods-burstable-pod05c2942f1da410ac8ac77e46dce6ca81.slice.
Nov 24 07:00:53.253554 kubelet[2311]: E1124 07:00:53.253526    2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-b-419a632674\" not found" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.268110 kubelet[2311]: I1124 07:00:53.268071    2311 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.268652 kubelet[2311]: E1124 07:00:53.268605    2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.144.92.64:6443/api/v1/nodes\": dial tcp 24.144.92.64:6443: connect: connection refused" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.274584 kubelet[2311]: E1124 07:00:53.274530    2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.92.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-b-419a632674?timeout=10s\": dial tcp 24.144.92.64:6443: connect: connection refused" interval="400ms"
Nov 24 07:00:53.274883 kubelet[2311]: I1124 07:00:53.274803    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26cf8d0a4efb18e8f260d452f7c70cda-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-b-419a632674\" (UID: \"26cf8d0a4efb18e8f260d452f7c70cda\") " pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.274883 kubelet[2311]: I1124 07:00:53.274829    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/618185baf9174d216692a62b59b07842-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-b-419a632674\" (UID: \"618185baf9174d216692a62b59b07842\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.274883 kubelet[2311]: I1124 07:00:53.274864    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/618185baf9174d216692a62b59b07842-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.1-b-419a632674\" (UID: \"618185baf9174d216692a62b59b07842\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.275050 kubelet[2311]: I1124 07:00:53.274944    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/618185baf9174d216692a62b59b07842-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-b-419a632674\" (UID: \"618185baf9174d216692a62b59b07842\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.275050 kubelet[2311]: I1124 07:00:53.275008    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/618185baf9174d216692a62b59b07842-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.1-b-419a632674\" (UID: \"618185baf9174d216692a62b59b07842\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.275050 kubelet[2311]: I1124 07:00:53.275029    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c2942f1da410ac8ac77e46dce6ca81-kubeconfig\") pod \"kube-scheduler-ci-4459.2.1-b-419a632674\" (UID: \"05c2942f1da410ac8ac77e46dce6ca81\") " pod="kube-system/kube-scheduler-ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.275127 kubelet[2311]: I1124 07:00:53.275067    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26cf8d0a4efb18e8f260d452f7c70cda-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-b-419a632674\" (UID: \"26cf8d0a4efb18e8f260d452f7c70cda\") " pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.275127 kubelet[2311]: I1124 07:00:53.275093    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26cf8d0a4efb18e8f260d452f7c70cda-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-b-419a632674\" (UID: \"26cf8d0a4efb18e8f260d452f7c70cda\") " pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.275127 kubelet[2311]: I1124 07:00:53.275115    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/618185baf9174d216692a62b59b07842-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-b-419a632674\" (UID: \"618185baf9174d216692a62b59b07842\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.470006 kubelet[2311]: I1124 07:00:53.469881    2311 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.470640 kubelet[2311]: E1124 07:00:53.470607    2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.144.92.64:6443/api/v1/nodes\": dial tcp 24.144.92.64:6443: connect: connection refused" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.542396 kubelet[2311]: E1124 07:00:53.541765    2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:00:53.543958 containerd[1499]: time="2025-11-24T07:00:53.543878451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-b-419a632674,Uid:26cf8d0a4efb18e8f260d452f7c70cda,Namespace:kube-system,Attempt:0,}"
Nov 24 07:00:53.548156 kubelet[2311]: E1124 07:00:53.548030    2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:00:53.553055 containerd[1499]: time="2025-11-24T07:00:53.552558108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-b-419a632674,Uid:618185baf9174d216692a62b59b07842,Namespace:kube-system,Attempt:0,}"
Nov 24 07:00:53.554803 kubelet[2311]: E1124 07:00:53.554763    2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:00:53.555949 containerd[1499]: time="2025-11-24T07:00:53.555921101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-b-419a632674,Uid:05c2942f1da410ac8ac77e46dce6ca81,Namespace:kube-system,Attempt:0,}"
Nov 24 07:00:53.660846 containerd[1499]: time="2025-11-24T07:00:53.660574632Z" level=info msg="connecting to shim 38e38803f6cefef71f998f765999b2356810c211008b08a7161855e8dbd32d23" address="unix:///run/containerd/s/b67ae8985f6abbf504b1415134a4f85729ce2faf949f123162b19c9e1c80cff9" namespace=k8s.io protocol=ttrpc version=3
Nov 24 07:00:53.668563 containerd[1499]: time="2025-11-24T07:00:53.668076865Z" level=info msg="connecting to shim 254b020f8d736f3686c58c1339d625558ea24b8aa6cf31957d7f5206521ff483" address="unix:///run/containerd/s/845d3da8def5e770de01cdb4c7bc19bc4682e3526e757ffc7a55759fda10e528" namespace=k8s.io protocol=ttrpc version=3
Nov 24 07:00:53.675707 kubelet[2311]: E1124 07:00:53.675650    2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.92.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-b-419a632674?timeout=10s\": dial tcp 24.144.92.64:6443: connect: connection refused" interval="800ms"
Nov 24 07:00:53.681176 containerd[1499]: time="2025-11-24T07:00:53.681102897Z" level=info msg="connecting to shim bce2f93040c11ecb7f2b927bb9dbcf7b794dc37ce6ec6b225b0c713348b6a0bc" address="unix:///run/containerd/s/bddbb21e8df825cd38b1c6483ec502c027629e2b55c3bb6bc001c9357ffa2dec" namespace=k8s.io protocol=ttrpc version=3
Nov 24 07:00:53.786307 systemd[1]: Started cri-containerd-38e38803f6cefef71f998f765999b2356810c211008b08a7161855e8dbd32d23.scope - libcontainer container 38e38803f6cefef71f998f765999b2356810c211008b08a7161855e8dbd32d23.
Nov 24 07:00:53.793141 systemd[1]: Started cri-containerd-254b020f8d736f3686c58c1339d625558ea24b8aa6cf31957d7f5206521ff483.scope - libcontainer container 254b020f8d736f3686c58c1339d625558ea24b8aa6cf31957d7f5206521ff483.
Nov 24 07:00:53.796076 systemd[1]: Started cri-containerd-bce2f93040c11ecb7f2b927bb9dbcf7b794dc37ce6ec6b225b0c713348b6a0bc.scope - libcontainer container bce2f93040c11ecb7f2b927bb9dbcf7b794dc37ce6ec6b225b0c713348b6a0bc.
Nov 24 07:00:53.880156 kubelet[2311]: I1124 07:00:53.880102    2311 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.880571 kubelet[2311]: E1124 07:00:53.880518    2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.144.92.64:6443/api/v1/nodes\": dial tcp 24.144.92.64:6443: connect: connection refused" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:53.913825 containerd[1499]: time="2025-11-24T07:00:53.913773356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-b-419a632674,Uid:618185baf9174d216692a62b59b07842,Namespace:kube-system,Attempt:0,} returns sandbox id \"38e38803f6cefef71f998f765999b2356810c211008b08a7161855e8dbd32d23\""
Nov 24 07:00:53.916039 containerd[1499]: time="2025-11-24T07:00:53.915991769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-b-419a632674,Uid:26cf8d0a4efb18e8f260d452f7c70cda,Namespace:kube-system,Attempt:0,} returns sandbox id \"254b020f8d736f3686c58c1339d625558ea24b8aa6cf31957d7f5206521ff483\""
Nov 24 07:00:53.917605 kubelet[2311]: E1124 07:00:53.917566    2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:00:53.918700 kubelet[2311]: E1124 07:00:53.918581    2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:00:53.922972 containerd[1499]: time="2025-11-24T07:00:53.921707019Z" level=info msg="CreateContainer within sandbox \"38e38803f6cefef71f998f765999b2356810c211008b08a7161855e8dbd32d23\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 24 07:00:53.925688 containerd[1499]: time="2025-11-24T07:00:53.925643553Z" level=info msg="CreateContainer within sandbox \"254b020f8d736f3686c58c1339d625558ea24b8aa6cf31957d7f5206521ff483\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 24 07:00:53.938921 containerd[1499]: time="2025-11-24T07:00:53.938715198Z" level=info msg="Container cae626154270d74460755fabe72000f767f8fae17087c1b839871185f487b3cc: CDI devices from CRI Config.CDIDevices: []"
Nov 24 07:00:53.939674 containerd[1499]: time="2025-11-24T07:00:53.939620554Z" level=info msg="Container 5f7d35ddc7657922f3ec4a22e4968532802729c4dc36442086c4d6e868c1c05c: CDI devices from CRI Config.CDIDevices: []"
Nov 24 07:00:53.950593 containerd[1499]: time="2025-11-24T07:00:53.950535524Z" level=info msg="CreateContainer within sandbox \"38e38803f6cefef71f998f765999b2356810c211008b08a7161855e8dbd32d23\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5f7d35ddc7657922f3ec4a22e4968532802729c4dc36442086c4d6e868c1c05c\""
Nov 24 07:00:53.952790 containerd[1499]: time="2025-11-24T07:00:53.952722450Z" level=info msg="CreateContainer within sandbox \"254b020f8d736f3686c58c1339d625558ea24b8aa6cf31957d7f5206521ff483\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cae626154270d74460755fabe72000f767f8fae17087c1b839871185f487b3cc\""
Nov 24 07:00:53.953171 containerd[1499]: time="2025-11-24T07:00:53.953142223Z" level=info msg="StartContainer for \"5f7d35ddc7657922f3ec4a22e4968532802729c4dc36442086c4d6e868c1c05c\""
Nov 24 07:00:53.953543 containerd[1499]: time="2025-11-24T07:00:53.953518052Z" level=info msg="StartContainer for \"cae626154270d74460755fabe72000f767f8fae17087c1b839871185f487b3cc\""
Nov 24 07:00:53.954768 containerd[1499]: time="2025-11-24T07:00:53.954734238Z" level=info msg="connecting to shim cae626154270d74460755fabe72000f767f8fae17087c1b839871185f487b3cc" address="unix:///run/containerd/s/845d3da8def5e770de01cdb4c7bc19bc4682e3526e757ffc7a55759fda10e528" protocol=ttrpc version=3
Nov 24 07:00:53.957786 containerd[1499]:
time="2025-11-24T07:00:53.957466305Z" level=info msg="connecting to shim 5f7d35ddc7657922f3ec4a22e4968532802729c4dc36442086c4d6e868c1c05c" address="unix:///run/containerd/s/b67ae8985f6abbf504b1415134a4f85729ce2faf949f123162b19c9e1c80cff9" protocol=ttrpc version=3 Nov 24 07:00:53.963150 containerd[1499]: time="2025-11-24T07:00:53.963106255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-b-419a632674,Uid:05c2942f1da410ac8ac77e46dce6ca81,Namespace:kube-system,Attempt:0,} returns sandbox id \"bce2f93040c11ecb7f2b927bb9dbcf7b794dc37ce6ec6b225b0c713348b6a0bc\"" Nov 24 07:00:53.964957 kubelet[2311]: E1124 07:00:53.964750 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:53.967882 containerd[1499]: time="2025-11-24T07:00:53.967827427Z" level=info msg="CreateContainer within sandbox \"bce2f93040c11ecb7f2b927bb9dbcf7b794dc37ce6ec6b225b0c713348b6a0bc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 07:00:53.979889 containerd[1499]: time="2025-11-24T07:00:53.979851439Z" level=info msg="Container ce355072ffdc38b7b6915237c53965dcb8ca1903b93eaf92049f44dbbdd1da30: CDI devices from CRI Config.CDIDevices: []" Nov 24 07:00:53.983314 kubelet[2311]: W1124 07:00:53.983251 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.144.92.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.144.92.64:6443: connect: connection refused Nov 24 07:00:53.983725 kubelet[2311]: E1124 07:00:53.983550 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.144.92.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.144.92.64:6443: connect: connection refused" 
logger="UnhandledError" Nov 24 07:00:53.984203 systemd[1]: Started cri-containerd-cae626154270d74460755fabe72000f767f8fae17087c1b839871185f487b3cc.scope - libcontainer container cae626154270d74460755fabe72000f767f8fae17087c1b839871185f487b3cc. Nov 24 07:00:53.999871 containerd[1499]: time="2025-11-24T07:00:53.998957961Z" level=info msg="CreateContainer within sandbox \"bce2f93040c11ecb7f2b927bb9dbcf7b794dc37ce6ec6b225b0c713348b6a0bc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ce355072ffdc38b7b6915237c53965dcb8ca1903b93eaf92049f44dbbdd1da30\"" Nov 24 07:00:53.999871 containerd[1499]: time="2025-11-24T07:00:53.999642411Z" level=info msg="StartContainer for \"ce355072ffdc38b7b6915237c53965dcb8ca1903b93eaf92049f44dbbdd1da30\"" Nov 24 07:00:54.002363 containerd[1499]: time="2025-11-24T07:00:54.001360039Z" level=info msg="connecting to shim ce355072ffdc38b7b6915237c53965dcb8ca1903b93eaf92049f44dbbdd1da30" address="unix:///run/containerd/s/bddbb21e8df825cd38b1c6483ec502c027629e2b55c3bb6bc001c9357ffa2dec" protocol=ttrpc version=3 Nov 24 07:00:54.009210 systemd[1]: Started cri-containerd-5f7d35ddc7657922f3ec4a22e4968532802729c4dc36442086c4d6e868c1c05c.scope - libcontainer container 5f7d35ddc7657922f3ec4a22e4968532802729c4dc36442086c4d6e868c1c05c. Nov 24 07:00:54.037223 systemd[1]: Started cri-containerd-ce355072ffdc38b7b6915237c53965dcb8ca1903b93eaf92049f44dbbdd1da30.scope - libcontainer container ce355072ffdc38b7b6915237c53965dcb8ca1903b93eaf92049f44dbbdd1da30. 
Nov 24 07:00:54.080375 containerd[1499]: time="2025-11-24T07:00:54.080200676Z" level=info msg="StartContainer for \"cae626154270d74460755fabe72000f767f8fae17087c1b839871185f487b3cc\" returns successfully" Nov 24 07:00:54.123855 containerd[1499]: time="2025-11-24T07:00:54.123521980Z" level=info msg="StartContainer for \"5f7d35ddc7657922f3ec4a22e4968532802729c4dc36442086c4d6e868c1c05c\" returns successfully" Nov 24 07:00:54.134655 kubelet[2311]: E1124 07:00:54.134336 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-b-419a632674\" not found" node="ci-4459.2.1-b-419a632674" Nov 24 07:00:54.134655 kubelet[2311]: E1124 07:00:54.134550 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:54.142577 kubelet[2311]: E1124 07:00:54.142534 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-b-419a632674\" not found" node="ci-4459.2.1-b-419a632674" Nov 24 07:00:54.143188 kubelet[2311]: E1124 07:00:54.142739 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:54.200758 containerd[1499]: time="2025-11-24T07:00:54.200708869Z" level=info msg="StartContainer for \"ce355072ffdc38b7b6915237c53965dcb8ca1903b93eaf92049f44dbbdd1da30\" returns successfully" Nov 24 07:00:54.682749 kubelet[2311]: I1124 07:00:54.682245 2311 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-b-419a632674" Nov 24 07:00:55.153991 kubelet[2311]: E1124 07:00:55.153033 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-b-419a632674\" not found" 
node="ci-4459.2.1-b-419a632674" Nov 24 07:00:55.153991 kubelet[2311]: E1124 07:00:55.153232 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:55.153991 kubelet[2311]: E1124 07:00:55.153649 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-b-419a632674\" not found" node="ci-4459.2.1-b-419a632674" Nov 24 07:00:55.153991 kubelet[2311]: E1124 07:00:55.153791 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:55.995971 kubelet[2311]: E1124 07:00:55.994828 2311 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.1-b-419a632674\" not found" node="ci-4459.2.1-b-419a632674" Nov 24 07:00:56.043817 kubelet[2311]: I1124 07:00:56.043487 2311 apiserver.go:52] "Watching apiserver" Nov 24 07:00:56.073953 kubelet[2311]: I1124 07:00:56.073873 2311 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 07:00:56.155265 kubelet[2311]: E1124 07:00:56.155222 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-b-419a632674\" not found" node="ci-4459.2.1-b-419a632674" Nov 24 07:00:56.155465 kubelet[2311]: E1124 07:00:56.155392 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:56.195606 kubelet[2311]: I1124 07:00:56.195553 2311 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-b-419a632674" Nov 24 07:00:56.274011 kubelet[2311]: I1124 07:00:56.273817 2311 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674" Nov 24 07:00:56.292152 kubelet[2311]: E1124 07:00:56.292071 2311 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-b-419a632674\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674" Nov 24 07:00:56.292152 kubelet[2311]: I1124 07:00:56.292146 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674" Nov 24 07:00:56.298488 kubelet[2311]: E1124 07:00:56.298426 2311 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.1-b-419a632674\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674" Nov 24 07:00:56.298488 kubelet[2311]: I1124 07:00:56.298468 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-b-419a632674" Nov 24 07:00:56.304604 kubelet[2311]: E1124 07:00:56.304541 2311 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.1-b-419a632674\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.1-b-419a632674" Nov 24 07:00:57.156417 kubelet[2311]: I1124 07:00:57.156103 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-b-419a632674" Nov 24 07:00:57.166073 kubelet[2311]: W1124 07:00:57.165915 2311 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 24 07:00:57.166876 kubelet[2311]: E1124 07:00:57.166806 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:58.159672 kubelet[2311]: E1124 07:00:58.159504 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:58.541856 systemd[1]: Reload requested from client PID 2580 ('systemctl') (unit session-7.scope)... Nov 24 07:00:58.542401 systemd[1]: Reloading... Nov 24 07:00:58.686236 zram_generator::config[2623]: No configuration found. Nov 24 07:00:59.060161 systemd[1]: Reloading finished in 517 ms. Nov 24 07:00:59.090475 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 07:00:59.110220 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 07:00:59.110934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 07:00:59.111247 systemd[1]: kubelet.service: Consumed 1.046s CPU time, 127M memory peak. Nov 24 07:00:59.115363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 07:00:59.309694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 07:00:59.326558 (kubelet)[2674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 07:00:59.423317 kubelet[2674]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 07:00:59.423958 kubelet[2674]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 07:00:59.423958 kubelet[2674]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 07:00:59.424138 kubelet[2674]: I1124 07:00:59.424089 2674 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 07:00:59.436957 kubelet[2674]: I1124 07:00:59.436863 2674 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 24 07:00:59.436957 kubelet[2674]: I1124 07:00:59.436950 2674 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 07:00:59.437382 kubelet[2674]: I1124 07:00:59.437361 2674 server.go:954] "Client rotation is on, will bootstrap in background" Nov 24 07:00:59.438986 kubelet[2674]: I1124 07:00:59.438949 2674 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 24 07:00:59.441776 kubelet[2674]: I1124 07:00:59.441733 2674 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 07:00:59.449240 kubelet[2674]: I1124 07:00:59.449186 2674 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 07:00:59.452967 kubelet[2674]: I1124 07:00:59.452938 2674 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 07:00:59.453204 kubelet[2674]: I1124 07:00:59.453151 2674 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 07:00:59.453418 kubelet[2674]: I1124 07:00:59.453203 2674 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-b-419a632674","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 07:00:59.453528 kubelet[2674]: I1124 07:00:59.453430 2674 topology_manager.go:138] "Creating topology manager 
with none policy" Nov 24 07:00:59.453528 kubelet[2674]: I1124 07:00:59.453441 2674 container_manager_linux.go:304] "Creating device plugin manager" Nov 24 07:00:59.453528 kubelet[2674]: I1124 07:00:59.453493 2674 state_mem.go:36] "Initialized new in-memory state store" Nov 24 07:00:59.453652 kubelet[2674]: I1124 07:00:59.453641 2674 kubelet.go:446] "Attempting to sync node with API server" Nov 24 07:00:59.453689 kubelet[2674]: I1124 07:00:59.453666 2674 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 07:00:59.453689 kubelet[2674]: I1124 07:00:59.453687 2674 kubelet.go:352] "Adding apiserver pod source" Nov 24 07:00:59.453752 kubelet[2674]: I1124 07:00:59.453697 2674 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 07:00:59.469714 kubelet[2674]: I1124 07:00:59.469334 2674 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 07:00:59.474257 kubelet[2674]: I1124 07:00:59.474211 2674 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 07:00:59.475821 kubelet[2674]: I1124 07:00:59.475251 2674 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 07:00:59.476297 kubelet[2674]: I1124 07:00:59.476198 2674 server.go:1287] "Started kubelet" Nov 24 07:00:59.482998 kubelet[2674]: I1124 07:00:59.482953 2674 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 07:00:59.488487 kubelet[2674]: I1124 07:00:59.488411 2674 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 07:00:59.491771 kubelet[2674]: I1124 07:00:59.491524 2674 server.go:479] "Adding debug handlers to kubelet server" Nov 24 07:00:59.494682 kubelet[2674]: I1124 07:00:59.494613 2674 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 07:00:59.494874 kubelet[2674]: I1124 07:00:59.494860 2674 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 07:00:59.495193 kubelet[2674]: I1124 07:00:59.495171 2674 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 07:00:59.497502 kubelet[2674]: I1124 07:00:59.497319 2674 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 07:00:59.499049 kubelet[2674]: I1124 07:00:59.499009 2674 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 07:00:59.500052 kubelet[2674]: I1124 07:00:59.499870 2674 reconciler.go:26] "Reconciler: start to sync state" Nov 24 07:00:59.505606 kubelet[2674]: I1124 07:00:59.505051 2674 factory.go:221] Registration of the systemd container factory successfully Nov 24 07:00:59.506356 kubelet[2674]: I1124 07:00:59.505968 2674 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 07:00:59.507970 kubelet[2674]: E1124 07:00:59.507810 2674 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 07:00:59.515561 kubelet[2674]: I1124 07:00:59.515530 2674 factory.go:221] Registration of the containerd container factory successfully Nov 24 07:00:59.520394 kubelet[2674]: I1124 07:00:59.520337 2674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 07:00:59.526629 kubelet[2674]: I1124 07:00:59.526478 2674 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 24 07:00:59.526629 kubelet[2674]: I1124 07:00:59.526524 2674 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 24 07:00:59.526629 kubelet[2674]: I1124 07:00:59.526551 2674 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 07:00:59.526629 kubelet[2674]: I1124 07:00:59.526558 2674 kubelet.go:2382] "Starting kubelet main sync loop" Nov 24 07:00:59.526629 kubelet[2674]: E1124 07:00:59.526610 2674 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 07:00:59.613796 kubelet[2674]: I1124 07:00:59.609311 2674 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 07:00:59.613796 kubelet[2674]: I1124 07:00:59.610210 2674 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 07:00:59.613796 kubelet[2674]: I1124 07:00:59.610247 2674 state_mem.go:36] "Initialized new in-memory state store" Nov 24 07:00:59.613796 kubelet[2674]: I1124 07:00:59.610440 2674 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 07:00:59.613796 kubelet[2674]: I1124 07:00:59.610451 2674 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 07:00:59.613796 kubelet[2674]: I1124 07:00:59.610471 2674 policy_none.go:49] "None policy: Start" Nov 24 07:00:59.613796 kubelet[2674]: I1124 07:00:59.610484 2674 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 07:00:59.613796 kubelet[2674]: I1124 07:00:59.610493 2674 state_mem.go:35] "Initializing new in-memory state store" Nov 24 07:00:59.613796 kubelet[2674]: I1124 07:00:59.610590 2674 state_mem.go:75] "Updated machine memory state" Nov 24 07:00:59.617250 kubelet[2674]: I1124 07:00:59.617223 2674 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 07:00:59.617418 kubelet[2674]: I1124 
07:00:59.617406 2674 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 07:00:59.617484 kubelet[2674]: I1124 07:00:59.617420 2674 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 07:00:59.618852 kubelet[2674]: I1124 07:00:59.618657 2674 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 07:00:59.623536 kubelet[2674]: E1124 07:00:59.623496 2674 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 07:00:59.639740 kubelet[2674]: I1124 07:00:59.639714 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.640235 kubelet[2674]: I1124 07:00:59.640218 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.641204 kubelet[2674]: I1124 07:00:59.641029 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.663636 kubelet[2674]: W1124 07:00:59.663593 2674 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 24 07:00:59.665451 kubelet[2674]: W1124 07:00:59.664167 2674 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 24 07:00:59.666041 kubelet[2674]: W1124 07:00:59.665974 2674 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 24 07:00:59.666145 kubelet[2674]: E1124 07:00:59.666075 2674 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.1-b-419a632674\" already 
exists" pod="kube-system/kube-scheduler-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.705634 kubelet[2674]: I1124 07:00:59.705353 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/618185baf9174d216692a62b59b07842-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-b-419a632674\" (UID: \"618185baf9174d216692a62b59b07842\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.705634 kubelet[2674]: I1124 07:00:59.705434 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c2942f1da410ac8ac77e46dce6ca81-kubeconfig\") pod \"kube-scheduler-ci-4459.2.1-b-419a632674\" (UID: \"05c2942f1da410ac8ac77e46dce6ca81\") " pod="kube-system/kube-scheduler-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.705634 kubelet[2674]: I1124 07:00:59.705458 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26cf8d0a4efb18e8f260d452f7c70cda-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-b-419a632674\" (UID: \"26cf8d0a4efb18e8f260d452f7c70cda\") " pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.705634 kubelet[2674]: I1124 07:00:59.705476 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26cf8d0a4efb18e8f260d452f7c70cda-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-b-419a632674\" (UID: \"26cf8d0a4efb18e8f260d452f7c70cda\") " pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.705634 kubelet[2674]: I1124 07:00:59.705495 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/618185baf9174d216692a62b59b07842-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4459.2.1-b-419a632674\" (UID: \"618185baf9174d216692a62b59b07842\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.706042 kubelet[2674]: I1124 07:00:59.705524 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/618185baf9174d216692a62b59b07842-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-b-419a632674\" (UID: \"618185baf9174d216692a62b59b07842\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.706042 kubelet[2674]: I1124 07:00:59.705548 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/618185baf9174d216692a62b59b07842-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-b-419a632674\" (UID: \"618185baf9174d216692a62b59b07842\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.706042 kubelet[2674]: I1124 07:00:59.705578 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/618185baf9174d216692a62b59b07842-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.1-b-419a632674\" (UID: \"618185baf9174d216692a62b59b07842\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.706042 kubelet[2674]: I1124 07:00:59.705606 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26cf8d0a4efb18e8f260d452f7c70cda-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-b-419a632674\" (UID: \"26cf8d0a4efb18e8f260d452f7c70cda\") " pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674" Nov 24 07:00:59.727703 kubelet[2674]: I1124 07:00:59.727041 2674 
kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:59.741435 kubelet[2674]: I1124 07:00:59.741060 2674 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:59.742326 kubelet[2674]: I1124 07:00:59.741838 2674 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-b-419a632674"
Nov 24 07:00:59.965410 kubelet[2674]: E1124 07:00:59.964669 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:00:59.965410 kubelet[2674]: E1124 07:00:59.965127 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:00:59.966714 kubelet[2674]: E1124 07:00:59.966689 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:00.461350 kubelet[2674]: I1124 07:01:00.461013 2674 apiserver.go:52] "Watching apiserver"
Nov 24 07:01:00.500559 kubelet[2674]: I1124 07:01:00.500412 2674 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 24 07:01:00.587267 kubelet[2674]: E1124 07:01:00.585184 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:00.587267 kubelet[2674]: I1124 07:01:00.585975 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674"
Nov 24 07:01:00.587267 kubelet[2674]: E1124 07:01:00.586465 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:00.597067 kubelet[2674]: W1124 07:01:00.597031 2674 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 24 07:01:00.597254 kubelet[2674]: E1124 07:01:00.597097 2674 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-b-419a632674\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674"
Nov 24 07:01:00.597254 kubelet[2674]: E1124 07:01:00.597247 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:00.648297 kubelet[2674]: I1124 07:01:00.648238 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.1-b-419a632674" podStartSLOduration=1.648216473 podStartE2EDuration="1.648216473s" podCreationTimestamp="2025-11-24 07:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 07:01:00.633673901 +0000 UTC m=+1.297838745" watchObservedRunningTime="2025-11-24 07:01:00.648216473 +0000 UTC m=+1.312381309"
Nov 24 07:01:00.648494 kubelet[2674]: I1124 07:01:00.648362 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.1-b-419a632674" podStartSLOduration=3.648357175 podStartE2EDuration="3.648357175s" podCreationTimestamp="2025-11-24 07:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 07:01:00.648014875 +0000 UTC m=+1.312179720" watchObservedRunningTime="2025-11-24 07:01:00.648357175 +0000 UTC m=+1.312522020"
Nov 24 07:01:00.681201 kubelet[2674]: I1124 07:01:00.681115 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.1-b-419a632674" podStartSLOduration=1.6810910639999999 podStartE2EDuration="1.681091064s" podCreationTimestamp="2025-11-24 07:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 07:01:00.66448239 +0000 UTC m=+1.328647250" watchObservedRunningTime="2025-11-24 07:01:00.681091064 +0000 UTC m=+1.345255900"
Nov 24 07:01:01.587594 kubelet[2674]: E1124 07:01:01.587409 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:01.587594 kubelet[2674]: E1124 07:01:01.587498 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:02.630858 kubelet[2674]: E1124 07:01:02.630808 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:02.974617 kubelet[2674]: I1124 07:01:02.974508 2674 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 24 07:01:02.975710 containerd[1499]: time="2025-11-24T07:01:02.975639273Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 24 07:01:02.976259 kubelet[2674]: I1124 07:01:02.975875 2674 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 24 07:01:03.522508 kubelet[2674]: E1124 07:01:03.522432 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:03.710295 systemd[1]: Created slice kubepods-besteffort-podc640a98a_0977_42f4_a76e_f54d3f392951.slice - libcontainer container kubepods-besteffort-podc640a98a_0977_42f4_a76e_f54d3f392951.slice.
Nov 24 07:01:03.740494 kubelet[2674]: I1124 07:01:03.740431 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c640a98a-0977-42f4-a76e-f54d3f392951-kube-proxy\") pod \"kube-proxy-qz6fp\" (UID: \"c640a98a-0977-42f4-a76e-f54d3f392951\") " pod="kube-system/kube-proxy-qz6fp"
Nov 24 07:01:03.740494 kubelet[2674]: I1124 07:01:03.740495 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c640a98a-0977-42f4-a76e-f54d3f392951-xtables-lock\") pod \"kube-proxy-qz6fp\" (UID: \"c640a98a-0977-42f4-a76e-f54d3f392951\") " pod="kube-system/kube-proxy-qz6fp"
Nov 24 07:01:03.741225 kubelet[2674]: I1124 07:01:03.740527 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t6vj\" (UniqueName: \"kubernetes.io/projected/c640a98a-0977-42f4-a76e-f54d3f392951-kube-api-access-5t6vj\") pod \"kube-proxy-qz6fp\" (UID: \"c640a98a-0977-42f4-a76e-f54d3f392951\") " pod="kube-system/kube-proxy-qz6fp"
Nov 24 07:01:03.741225 kubelet[2674]: I1124 07:01:03.740566 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c640a98a-0977-42f4-a76e-f54d3f392951-lib-modules\") pod \"kube-proxy-qz6fp\" (UID: \"c640a98a-0977-42f4-a76e-f54d3f392951\") " pod="kube-system/kube-proxy-qz6fp"
Nov 24 07:01:04.021436 kubelet[2674]: E1124 07:01:04.021390 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:04.023554 containerd[1499]: time="2025-11-24T07:01:04.023412087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qz6fp,Uid:c640a98a-0977-42f4-a76e-f54d3f392951,Namespace:kube-system,Attempt:0,}"
Nov 24 07:01:04.046178 systemd[1]: Created slice kubepods-besteffort-pod4d9fde8d_2400_4ee8_8e69_b787b4b5e420.slice - libcontainer container kubepods-besteffort-pod4d9fde8d_2400_4ee8_8e69_b787b4b5e420.slice.
Nov 24 07:01:04.071221 containerd[1499]: time="2025-11-24T07:01:04.071168874Z" level=info msg="connecting to shim f2ab7011d58288cb6ac7245958fd266f17ae5e6c128aec9d6a394eda3a9eba63" address="unix:///run/containerd/s/54dd3cc4aa5bcfd06c2e546ee92d3b194ca8c0f3694c148bef5ac5ba03190b7c" namespace=k8s.io protocol=ttrpc version=3
Nov 24 07:01:04.115189 systemd[1]: Started cri-containerd-f2ab7011d58288cb6ac7245958fd266f17ae5e6c128aec9d6a394eda3a9eba63.scope - libcontainer container f2ab7011d58288cb6ac7245958fd266f17ae5e6c128aec9d6a394eda3a9eba63.
Nov 24 07:01:04.143798 kubelet[2674]: I1124 07:01:04.143636 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wvmj\" (UniqueName: \"kubernetes.io/projected/4d9fde8d-2400-4ee8-8e69-b787b4b5e420-kube-api-access-9wvmj\") pod \"tigera-operator-7dcd859c48-pl9d7\" (UID: \"4d9fde8d-2400-4ee8-8e69-b787b4b5e420\") " pod="tigera-operator/tigera-operator-7dcd859c48-pl9d7"
Nov 24 07:01:04.143994 kubelet[2674]: I1124 07:01:04.143845 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4d9fde8d-2400-4ee8-8e69-b787b4b5e420-var-lib-calico\") pod \"tigera-operator-7dcd859c48-pl9d7\" (UID: \"4d9fde8d-2400-4ee8-8e69-b787b4b5e420\") " pod="tigera-operator/tigera-operator-7dcd859c48-pl9d7"
Nov 24 07:01:04.154739 containerd[1499]: time="2025-11-24T07:01:04.154660838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qz6fp,Uid:c640a98a-0977-42f4-a76e-f54d3f392951,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2ab7011d58288cb6ac7245958fd266f17ae5e6c128aec9d6a394eda3a9eba63\""
Nov 24 07:01:04.156188 kubelet[2674]: E1124 07:01:04.156148 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:04.160892 containerd[1499]: time="2025-11-24T07:01:04.160669099Z" level=info msg="CreateContainer within sandbox \"f2ab7011d58288cb6ac7245958fd266f17ae5e6c128aec9d6a394eda3a9eba63\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 24 07:01:04.173057 containerd[1499]: time="2025-11-24T07:01:04.173014628Z" level=info msg="Container 321edd400446a7f9629b9d4e27dd8033ae7c1e92cbf683489bc8e50d8b216a78: CDI devices from CRI Config.CDIDevices: []"
Nov 24 07:01:04.180906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249733359.mount: Deactivated successfully.
Nov 24 07:01:04.186522 containerd[1499]: time="2025-11-24T07:01:04.186423381Z" level=info msg="CreateContainer within sandbox \"f2ab7011d58288cb6ac7245958fd266f17ae5e6c128aec9d6a394eda3a9eba63\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"321edd400446a7f9629b9d4e27dd8033ae7c1e92cbf683489bc8e50d8b216a78\""
Nov 24 07:01:04.188536 containerd[1499]: time="2025-11-24T07:01:04.188008009Z" level=info msg="StartContainer for \"321edd400446a7f9629b9d4e27dd8033ae7c1e92cbf683489bc8e50d8b216a78\""
Nov 24 07:01:04.190992 containerd[1499]: time="2025-11-24T07:01:04.190945952Z" level=info msg="connecting to shim 321edd400446a7f9629b9d4e27dd8033ae7c1e92cbf683489bc8e50d8b216a78" address="unix:///run/containerd/s/54dd3cc4aa5bcfd06c2e546ee92d3b194ca8c0f3694c148bef5ac5ba03190b7c" protocol=ttrpc version=3
Nov 24 07:01:04.216195 systemd[1]: Started cri-containerd-321edd400446a7f9629b9d4e27dd8033ae7c1e92cbf683489bc8e50d8b216a78.scope - libcontainer container 321edd400446a7f9629b9d4e27dd8033ae7c1e92cbf683489bc8e50d8b216a78.
Nov 24 07:01:04.303657 containerd[1499]: time="2025-11-24T07:01:04.303516506Z" level=info msg="StartContainer for \"321edd400446a7f9629b9d4e27dd8033ae7c1e92cbf683489bc8e50d8b216a78\" returns successfully"
Nov 24 07:01:04.355073 containerd[1499]: time="2025-11-24T07:01:04.354711497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pl9d7,Uid:4d9fde8d-2400-4ee8-8e69-b787b4b5e420,Namespace:tigera-operator,Attempt:0,}"
Nov 24 07:01:04.379827 containerd[1499]: time="2025-11-24T07:01:04.379131258Z" level=info msg="connecting to shim 3e81c280ddca75a30515a4f2dc4d0e067ccf106cb005a4f5a295e92d176fcf88" address="unix:///run/containerd/s/ebf67c909f2e0fc0ad9ef35d3c56e091ec07154beea78791e8125c628163e022" namespace=k8s.io protocol=ttrpc version=3
Nov 24 07:01:04.418273 systemd[1]: Started cri-containerd-3e81c280ddca75a30515a4f2dc4d0e067ccf106cb005a4f5a295e92d176fcf88.scope - libcontainer container 3e81c280ddca75a30515a4f2dc4d0e067ccf106cb005a4f5a295e92d176fcf88.
Nov 24 07:01:04.523794 containerd[1499]: time="2025-11-24T07:01:04.523632870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pl9d7,Uid:4d9fde8d-2400-4ee8-8e69-b787b4b5e420,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3e81c280ddca75a30515a4f2dc4d0e067ccf106cb005a4f5a295e92d176fcf88\""
Nov 24 07:01:04.527405 containerd[1499]: time="2025-11-24T07:01:04.527269579Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 24 07:01:04.530295 systemd-resolved[1376]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Nov 24 07:01:04.596405 kubelet[2674]: E1124 07:01:04.596266 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:05.611142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107964938.mount: Deactivated successfully.
Nov 24 07:01:05.867413 kubelet[2674]: E1124 07:01:05.866247 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:05.899832 kubelet[2674]: I1124 07:01:05.899582 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qz6fp" podStartSLOduration=2.899551844 podStartE2EDuration="2.899551844s" podCreationTimestamp="2025-11-24 07:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 07:01:04.612792532 +0000 UTC m=+5.276957368" watchObservedRunningTime="2025-11-24 07:01:05.899551844 +0000 UTC m=+6.563716721"
Nov 24 07:01:06.432587 containerd[1499]: time="2025-11-24T07:01:06.432520825Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:01:06.433737 containerd[1499]: time="2025-11-24T07:01:06.433489932Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 24 07:01:06.434709 containerd[1499]: time="2025-11-24T07:01:06.434666309Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:01:06.437079 containerd[1499]: time="2025-11-24T07:01:06.437035648Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 07:01:06.438632 containerd[1499]: time="2025-11-24T07:01:06.438089646Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.910129774s"
Nov 24 07:01:06.438632 containerd[1499]: time="2025-11-24T07:01:06.438134159Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 24 07:01:06.441649 containerd[1499]: time="2025-11-24T07:01:06.441600081Z" level=info msg="CreateContainer within sandbox \"3e81c280ddca75a30515a4f2dc4d0e067ccf106cb005a4f5a295e92d176fcf88\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 24 07:01:06.454132 containerd[1499]: time="2025-11-24T07:01:06.453824248Z" level=info msg="Container 6bc4e1a0088cd9a27b1976dd3fab75ecb35c3bc9bd3409cad93ee86a19d61b57: CDI devices from CRI Config.CDIDevices: []"
Nov 24 07:01:06.463651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90180538.mount: Deactivated successfully.
Nov 24 07:01:06.470361 containerd[1499]: time="2025-11-24T07:01:06.470310949Z" level=info msg="CreateContainer within sandbox \"3e81c280ddca75a30515a4f2dc4d0e067ccf106cb005a4f5a295e92d176fcf88\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6bc4e1a0088cd9a27b1976dd3fab75ecb35c3bc9bd3409cad93ee86a19d61b57\""
Nov 24 07:01:06.471744 containerd[1499]: time="2025-11-24T07:01:06.471653605Z" level=info msg="StartContainer for \"6bc4e1a0088cd9a27b1976dd3fab75ecb35c3bc9bd3409cad93ee86a19d61b57\""
Nov 24 07:01:06.473281 containerd[1499]: time="2025-11-24T07:01:06.473212447Z" level=info msg="connecting to shim 6bc4e1a0088cd9a27b1976dd3fab75ecb35c3bc9bd3409cad93ee86a19d61b57" address="unix:///run/containerd/s/ebf67c909f2e0fc0ad9ef35d3c56e091ec07154beea78791e8125c628163e022" protocol=ttrpc version=3
Nov 24 07:01:06.506268 systemd[1]: Started cri-containerd-6bc4e1a0088cd9a27b1976dd3fab75ecb35c3bc9bd3409cad93ee86a19d61b57.scope - libcontainer container 6bc4e1a0088cd9a27b1976dd3fab75ecb35c3bc9bd3409cad93ee86a19d61b57.
Nov 24 07:01:06.556626 containerd[1499]: time="2025-11-24T07:01:06.556567461Z" level=info msg="StartContainer for \"6bc4e1a0088cd9a27b1976dd3fab75ecb35c3bc9bd3409cad93ee86a19d61b57\" returns successfully"
Nov 24 07:01:06.609668 kubelet[2674]: E1124 07:01:06.609632 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:06.622560 kubelet[2674]: I1124 07:01:06.622220 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-pl9d7" podStartSLOduration=1.708388814 podStartE2EDuration="3.622203004s" podCreationTimestamp="2025-11-24 07:01:03 +0000 UTC" firstStartedPulling="2025-11-24 07:01:04.525584095 +0000 UTC m=+5.189748938" lastFinishedPulling="2025-11-24 07:01:06.439398305 +0000 UTC m=+7.103563128" observedRunningTime="2025-11-24 07:01:06.62111067 +0000 UTC m=+7.285275515" watchObservedRunningTime="2025-11-24 07:01:06.622203004 +0000 UTC m=+7.286367849"
Nov 24 07:01:12.647257 kubelet[2674]: E1124 07:01:12.646308 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:13.517889 sudo[1754]: pam_unix(sudo:session): session closed for user root
Nov 24 07:01:13.523939 sshd[1753]: Connection closed by 139.178.68.195 port 35264
Nov 24 07:01:13.526478 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Nov 24 07:01:13.534425 systemd[1]: sshd@6-24.144.92.64:22-139.178.68.195:35264.service: Deactivated successfully.
Nov 24 07:01:13.541508 systemd[1]: session-7.scope: Deactivated successfully.
Nov 24 07:01:13.542021 systemd[1]: session-7.scope: Consumed 5.344s CPU time, 157.5M memory peak.
Nov 24 07:01:13.545121 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit.
Nov 24 07:01:13.550142 kubelet[2674]: E1124 07:01:13.549296 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:13.550892 systemd-logind[1472]: Removed session 7.
Nov 24 07:01:13.627581 kubelet[2674]: E1124 07:01:13.627523 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:14.499090 update_engine[1473]: I20251124 07:01:14.498975 1473 update_attempter.cc:509] Updating boot flags...
Nov 24 07:01:21.090454 systemd[1]: Created slice kubepods-besteffort-pod0f8f6935_83eb_48c1_9359_147d6a162770.slice - libcontainer container kubepods-besteffort-pod0f8f6935_83eb_48c1_9359_147d6a162770.slice.
Nov 24 07:01:21.173142 kubelet[2674]: I1124 07:01:21.173087 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f8f6935-83eb-48c1-9359-147d6a162770-tigera-ca-bundle\") pod \"calico-typha-59fb8bc446-9v2z9\" (UID: \"0f8f6935-83eb-48c1-9359-147d6a162770\") " pod="calico-system/calico-typha-59fb8bc446-9v2z9"
Nov 24 07:01:21.173142 kubelet[2674]: I1124 07:01:21.173153 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0f8f6935-83eb-48c1-9359-147d6a162770-typha-certs\") pod \"calico-typha-59fb8bc446-9v2z9\" (UID: \"0f8f6935-83eb-48c1-9359-147d6a162770\") " pod="calico-system/calico-typha-59fb8bc446-9v2z9"
Nov 24 07:01:21.173142 kubelet[2674]: I1124 07:01:21.173199 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kk84\" (UniqueName: \"kubernetes.io/projected/0f8f6935-83eb-48c1-9359-147d6a162770-kube-api-access-8kk84\") pod \"calico-typha-59fb8bc446-9v2z9\" (UID: \"0f8f6935-83eb-48c1-9359-147d6a162770\") " pod="calico-system/calico-typha-59fb8bc446-9v2z9"
Nov 24 07:01:21.235187 systemd[1]: Created slice kubepods-besteffort-podda1803b3_1d1c_4cd0_ae43_18c51b2788e8.slice - libcontainer container kubepods-besteffort-podda1803b3_1d1c_4cd0_ae43_18c51b2788e8.slice.
Nov 24 07:01:21.274396 kubelet[2674]: I1124 07:01:21.274281 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-cni-bin-dir\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.274396 kubelet[2674]: I1124 07:01:21.274348 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl697\" (UniqueName: \"kubernetes.io/projected/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-kube-api-access-vl697\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.274706 kubelet[2674]: I1124 07:01:21.274422 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-xtables-lock\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.274706 kubelet[2674]: I1124 07:01:21.274470 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-policysync\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.274706 kubelet[2674]: I1124 07:01:21.274502 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-cni-log-dir\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.274706 kubelet[2674]: I1124 07:01:21.274552 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-tigera-ca-bundle\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.274706 kubelet[2674]: I1124 07:01:21.274587 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-var-run-calico\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.274850 kubelet[2674]: I1124 07:01:21.274620 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-var-lib-calico\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.274850 kubelet[2674]: I1124 07:01:21.274667 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-flexvol-driver-host\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.274850 kubelet[2674]: I1124 07:01:21.274696 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-node-certs\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.274850 kubelet[2674]: I1124 07:01:21.274724 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-cni-net-dir\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.274850 kubelet[2674]: I1124 07:01:21.274750 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da1803b3-1d1c-4cd0-ae43-18c51b2788e8-lib-modules\") pod \"calico-node-5czdb\" (UID: \"da1803b3-1d1c-4cd0-ae43-18c51b2788e8\") " pod="calico-system/calico-node-5czdb"
Nov 24 07:01:21.358682 kubelet[2674]: E1124 07:01:21.357721 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15"
Nov 24 07:01:21.379070 kubelet[2674]: E1124 07:01:21.379022 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.379070 kubelet[2674]: W1124 07:01:21.379058 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.380561 kubelet[2674]: E1124 07:01:21.380273 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.380730 kubelet[2674]: E1124 07:01:21.380622 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.380730 kubelet[2674]: W1124 07:01:21.380640 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.380730 kubelet[2674]: E1124 07:01:21.380666 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.380988 kubelet[2674]: E1124 07:01:21.380922 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.380988 kubelet[2674]: W1124 07:01:21.380936 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.380988 kubelet[2674]: E1124 07:01:21.380953 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.381352 kubelet[2674]: E1124 07:01:21.381311 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.381352 kubelet[2674]: W1124 07:01:21.381330 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.381352 kubelet[2674]: E1124 07:01:21.381347 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.382058 kubelet[2674]: E1124 07:01:21.382030 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.382058 kubelet[2674]: W1124 07:01:21.382050 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.382408 kubelet[2674]: E1124 07:01:21.382069 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.384048 kubelet[2674]: E1124 07:01:21.384003 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.384048 kubelet[2674]: W1124 07:01:21.384028 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.384048 kubelet[2674]: E1124 07:01:21.384051 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.384930 kubelet[2674]: E1124 07:01:21.384301 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.384930 kubelet[2674]: W1124 07:01:21.384322 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.384930 kubelet[2674]: E1124 07:01:21.384342 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.386750 kubelet[2674]: E1124 07:01:21.386618 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.386750 kubelet[2674]: W1124 07:01:21.386642 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.386750 kubelet[2674]: E1124 07:01:21.386661 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.388704 kubelet[2674]: E1124 07:01:21.388418 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.388704 kubelet[2674]: W1124 07:01:21.388442 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.388704 kubelet[2674]: E1124 07:01:21.388464 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.389442 kubelet[2674]: E1124 07:01:21.389336 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.389442 kubelet[2674]: W1124 07:01:21.389357 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.389442 kubelet[2674]: E1124 07:01:21.389380 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.390719 kubelet[2674]: E1124 07:01:21.390653 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.390719 kubelet[2674]: W1124 07:01:21.390674 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.391310 kubelet[2674]: E1124 07:01:21.390809 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.391310 kubelet[2674]: E1124 07:01:21.390922 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.391310 kubelet[2674]: W1124 07:01:21.390935 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.391462 kubelet[2674]: E1124 07:01:21.391413 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.391462 kubelet[2674]: W1124 07:01:21.391427 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.391912 kubelet[2674]: E1124 07:01:21.391584 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.391912 kubelet[2674]: E1124 07:01:21.391627 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.392096 kubelet[2674]: E1124 07:01:21.392000 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.392096 kubelet[2674]: W1124 07:01:21.392011 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.392996 kubelet[2674]: E1124 07:01:21.392951 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 07:01:21.393417 kubelet[2674]: E1124 07:01:21.393374 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 07:01:21.393417 kubelet[2674]: W1124 07:01:21.393391 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 07:01:21.393417 kubelet[2674]: E1124 07:01:21.393408 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 24 07:01:21.394036 kubelet[2674]: E1124 07:01:21.393617 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.394036 kubelet[2674]: W1124 07:01:21.393626 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.394036 kubelet[2674]: E1124 07:01:21.393647 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.394036 kubelet[2674]: E1124 07:01:21.393854 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.394036 kubelet[2674]: W1124 07:01:21.393866 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.394504 kubelet[2674]: E1124 07:01:21.394387 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.398882 kubelet[2674]: E1124 07:01:21.398805 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:21.399841 kubelet[2674]: E1124 07:01:21.397486 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.399841 kubelet[2674]: W1124 07:01:21.399829 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.400097 kubelet[2674]: E1124 07:01:21.399864 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.401282 kubelet[2674]: E1124 07:01:21.401223 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.404118 kubelet[2674]: W1124 07:01:21.401244 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.404118 kubelet[2674]: E1124 07:01:21.401445 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.404118 kubelet[2674]: E1124 07:01:21.402711 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.404118 kubelet[2674]: W1124 07:01:21.402736 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.404118 kubelet[2674]: E1124 07:01:21.402767 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.404293 containerd[1499]: time="2025-11-24T07:01:21.401504861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59fb8bc446-9v2z9,Uid:0f8f6935-83eb-48c1-9359-147d6a162770,Namespace:calico-system,Attempt:0,}" Nov 24 07:01:21.407854 kubelet[2674]: E1124 07:01:21.406403 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.407854 kubelet[2674]: W1124 07:01:21.406450 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.407854 kubelet[2674]: E1124 07:01:21.406481 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.410703 kubelet[2674]: E1124 07:01:21.410662 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.410703 kubelet[2674]: W1124 07:01:21.410690 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.412434 kubelet[2674]: E1124 07:01:21.410740 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.441055 kubelet[2674]: E1124 07:01:21.440011 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.441055 kubelet[2674]: W1124 07:01:21.440048 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.441055 kubelet[2674]: E1124 07:01:21.440078 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.444439 kubelet[2674]: E1124 07:01:21.444403 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.444820 kubelet[2674]: W1124 07:01:21.444636 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.444820 kubelet[2674]: E1124 07:01:21.444673 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.446147 kubelet[2674]: E1124 07:01:21.445997 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.446147 kubelet[2674]: W1124 07:01:21.446025 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.446548 kubelet[2674]: E1124 07:01:21.446051 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.448266 kubelet[2674]: E1124 07:01:21.448015 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.448266 kubelet[2674]: W1124 07:01:21.448041 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.448266 kubelet[2674]: E1124 07:01:21.448065 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.448661 kubelet[2674]: E1124 07:01:21.448493 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.448661 kubelet[2674]: W1124 07:01:21.448508 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.448661 kubelet[2674]: E1124 07:01:21.448523 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.448974 kubelet[2674]: E1124 07:01:21.448832 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.448974 kubelet[2674]: W1124 07:01:21.448844 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.448974 kubelet[2674]: E1124 07:01:21.448856 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.449202 kubelet[2674]: E1124 07:01:21.449131 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.449202 kubelet[2674]: W1124 07:01:21.449143 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.449202 kubelet[2674]: E1124 07:01:21.449161 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.449484 kubelet[2674]: E1124 07:01:21.449469 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.449560 kubelet[2674]: W1124 07:01:21.449549 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.449669 kubelet[2674]: E1124 07:01:21.449655 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.452483 kubelet[2674]: E1124 07:01:21.452331 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.452483 kubelet[2674]: W1124 07:01:21.452357 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.452483 kubelet[2674]: E1124 07:01:21.452387 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.454081 kubelet[2674]: E1124 07:01:21.453891 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.454081 kubelet[2674]: W1124 07:01:21.453955 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.454081 kubelet[2674]: E1124 07:01:21.453991 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.454627 kubelet[2674]: E1124 07:01:21.454556 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.454627 kubelet[2674]: W1124 07:01:21.454572 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.454627 kubelet[2674]: E1124 07:01:21.454589 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.455684 kubelet[2674]: E1124 07:01:21.455338 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.455684 kubelet[2674]: W1124 07:01:21.455582 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.455684 kubelet[2674]: E1124 07:01:21.455600 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.456852 kubelet[2674]: E1124 07:01:21.456828 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.457027 kubelet[2674]: W1124 07:01:21.456925 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.457027 kubelet[2674]: E1124 07:01:21.456943 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.458241 kubelet[2674]: E1124 07:01:21.458185 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.458241 kubelet[2674]: W1124 07:01:21.458204 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.458241 kubelet[2674]: E1124 07:01:21.458221 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.458991 kubelet[2674]: E1124 07:01:21.458815 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.458991 kubelet[2674]: W1124 07:01:21.458920 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.458991 kubelet[2674]: E1124 07:01:21.458938 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.460090 kubelet[2674]: E1124 07:01:21.459889 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.460090 kubelet[2674]: W1124 07:01:21.459958 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.460090 kubelet[2674]: E1124 07:01:21.459973 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.460889 kubelet[2674]: E1124 07:01:21.460752 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.460889 kubelet[2674]: W1124 07:01:21.460767 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.460889 kubelet[2674]: E1124 07:01:21.460781 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.461767 kubelet[2674]: E1124 07:01:21.461751 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.462011 kubelet[2674]: W1124 07:01:21.461821 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.462011 kubelet[2674]: E1124 07:01:21.461864 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.462868 kubelet[2674]: E1124 07:01:21.462678 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.462868 kubelet[2674]: W1124 07:01:21.462816 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.462868 kubelet[2674]: E1124 07:01:21.462833 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.464048 kubelet[2674]: E1124 07:01:21.463990 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.464562 kubelet[2674]: W1124 07:01:21.464328 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.464562 kubelet[2674]: E1124 07:01:21.464353 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.464933 kubelet[2674]: E1124 07:01:21.464874 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.465173 kubelet[2674]: W1124 07:01:21.465005 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.465173 kubelet[2674]: E1124 07:01:21.465021 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.469597 containerd[1499]: time="2025-11-24T07:01:21.469490981Z" level=info msg="connecting to shim d88310b2da1ffad0689696ce6d729b1b9ac0c128c184bba6d2ef5e4b443e7ea9" address="unix:///run/containerd/s/6bf2e8f3da09c2c97625048a63f7c84255c2ef7ec6fa2fea593f489b46650d22" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:01:21.478800 kubelet[2674]: E1124 07:01:21.478716 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.479559 kubelet[2674]: W1124 07:01:21.479199 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.479559 kubelet[2674]: E1124 07:01:21.479253 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.479559 kubelet[2674]: I1124 07:01:21.479337 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c34e73b2-4364-4998-a5a0-398cc36c9e15-registration-dir\") pod \"csi-node-driver-xq9xq\" (UID: \"c34e73b2-4364-4998-a5a0-398cc36c9e15\") " pod="calico-system/csi-node-driver-xq9xq" Nov 24 07:01:21.480233 kubelet[2674]: E1124 07:01:21.480209 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.480513 kubelet[2674]: W1124 07:01:21.480343 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.480513 kubelet[2674]: E1124 07:01:21.480376 2674 plugins.go:695] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.480753 kubelet[2674]: I1124 07:01:21.480734 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c34e73b2-4364-4998-a5a0-398cc36c9e15-varrun\") pod \"csi-node-driver-xq9xq\" (UID: \"c34e73b2-4364-4998-a5a0-398cc36c9e15\") " pod="calico-system/csi-node-driver-xq9xq" Nov 24 07:01:21.482038 kubelet[2674]: E1124 07:01:21.480837 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.482038 kubelet[2674]: W1124 07:01:21.482000 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.482038 kubelet[2674]: E1124 07:01:21.482039 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.482353 kubelet[2674]: E1124 07:01:21.482257 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.482353 kubelet[2674]: W1124 07:01:21.482265 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.482430 kubelet[2674]: E1124 07:01:21.482424 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.482463 kubelet[2674]: W1124 07:01:21.482431 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.482834 kubelet[2674]: E1124 07:01:21.482548 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.482834 kubelet[2674]: W1124 07:01:21.482558 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.482834 kubelet[2674]: E1124 07:01:21.482568 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.482834 kubelet[2674]: E1124 07:01:21.482577 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.482834 kubelet[2674]: E1124 07:01:21.482580 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.482834 kubelet[2674]: I1124 07:01:21.482601 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c34e73b2-4364-4998-a5a0-398cc36c9e15-kubelet-dir\") pod \"csi-node-driver-xq9xq\" (UID: \"c34e73b2-4364-4998-a5a0-398cc36c9e15\") " pod="calico-system/csi-node-driver-xq9xq" Nov 24 07:01:21.483713 kubelet[2674]: E1124 07:01:21.483450 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.483713 kubelet[2674]: W1124 07:01:21.483491 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.483713 kubelet[2674]: E1124 07:01:21.483526 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.485112 kubelet[2674]: E1124 07:01:21.485069 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.486014 kubelet[2674]: W1124 07:01:21.485458 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.486014 kubelet[2674]: E1124 07:01:21.485495 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.486445 kubelet[2674]: E1124 07:01:21.486427 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.486792 kubelet[2674]: W1124 07:01:21.486648 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.486792 kubelet[2674]: E1124 07:01:21.486679 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.488105 kubelet[2674]: E1124 07:01:21.488068 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.488105 kubelet[2674]: W1124 07:01:21.488095 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.488783 kubelet[2674]: E1124 07:01:21.488119 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.488783 kubelet[2674]: I1124 07:01:21.488154 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfsxg\" (UniqueName: \"kubernetes.io/projected/c34e73b2-4364-4998-a5a0-398cc36c9e15-kube-api-access-sfsxg\") pod \"csi-node-driver-xq9xq\" (UID: \"c34e73b2-4364-4998-a5a0-398cc36c9e15\") " pod="calico-system/csi-node-driver-xq9xq" Nov 24 07:01:21.488783 kubelet[2674]: E1124 07:01:21.488380 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.488783 kubelet[2674]: W1124 07:01:21.488389 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.488783 kubelet[2674]: E1124 07:01:21.488405 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.488783 kubelet[2674]: I1124 07:01:21.488423 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c34e73b2-4364-4998-a5a0-398cc36c9e15-socket-dir\") pod \"csi-node-driver-xq9xq\" (UID: \"c34e73b2-4364-4998-a5a0-398cc36c9e15\") " pod="calico-system/csi-node-driver-xq9xq" Nov 24 07:01:21.489957 kubelet[2674]: E1124 07:01:21.489790 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.490552 kubelet[2674]: W1124 07:01:21.490060 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.490552 kubelet[2674]: E1124 07:01:21.490216 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.491332 kubelet[2674]: E1124 07:01:21.491311 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.491574 kubelet[2674]: W1124 07:01:21.491493 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.491979 kubelet[2674]: E1124 07:01:21.491954 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.493055 kubelet[2674]: E1124 07:01:21.493009 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.493055 kubelet[2674]: W1124 07:01:21.493032 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.493055 kubelet[2674]: E1124 07:01:21.493051 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.493323 kubelet[2674]: E1124 07:01:21.493212 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.493323 kubelet[2674]: W1124 07:01:21.493223 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.493323 kubelet[2674]: E1124 07:01:21.493231 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.529201 systemd[1]: Started cri-containerd-d88310b2da1ffad0689696ce6d729b1b9ac0c128c184bba6d2ef5e4b443e7ea9.scope - libcontainer container d88310b2da1ffad0689696ce6d729b1b9ac0c128c184bba6d2ef5e4b443e7ea9. 
Nov 24 07:01:21.541223 kubelet[2674]: E1124 07:01:21.541180 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:21.543510 containerd[1499]: time="2025-11-24T07:01:21.543451214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5czdb,Uid:da1803b3-1d1c-4cd0-ae43-18c51b2788e8,Namespace:calico-system,Attempt:0,}" Nov 24 07:01:21.590819 kubelet[2674]: E1124 07:01:21.590759 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.591293 kubelet[2674]: W1124 07:01:21.590798 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.591293 kubelet[2674]: E1124 07:01:21.591076 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.592096 kubelet[2674]: E1124 07:01:21.592018 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.592096 kubelet[2674]: W1124 07:01:21.592039 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.592096 kubelet[2674]: E1124 07:01:21.592079 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.592629 kubelet[2674]: E1124 07:01:21.592281 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.592629 kubelet[2674]: W1124 07:01:21.592289 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.592629 kubelet[2674]: E1124 07:01:21.592300 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.592629 kubelet[2674]: E1124 07:01:21.592479 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.592629 kubelet[2674]: W1124 07:01:21.592488 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.593532 kubelet[2674]: E1124 07:01:21.593297 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.594638 kubelet[2674]: E1124 07:01:21.593629 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.594638 kubelet[2674]: W1124 07:01:21.593644 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.594638 kubelet[2674]: E1124 07:01:21.593693 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.594638 kubelet[2674]: E1124 07:01:21.594444 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.594638 kubelet[2674]: W1124 07:01:21.594457 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.594638 kubelet[2674]: E1124 07:01:21.594471 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.595173 containerd[1499]: time="2025-11-24T07:01:21.593480408Z" level=info msg="connecting to shim a7cba6224beb50df54719c9c579a74b6f377f057f78ae7861c048eccc9b0e038" address="unix:///run/containerd/s/f1aa231ece507f56f41d4dac08fbac4e11711ef9112ac6cf4a28a4c976deface" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:01:21.595233 kubelet[2674]: E1124 07:01:21.594665 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.595233 kubelet[2674]: W1124 07:01:21.594671 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.595233 kubelet[2674]: E1124 07:01:21.594685 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.595233 kubelet[2674]: E1124 07:01:21.594875 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.595233 kubelet[2674]: W1124 07:01:21.594881 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.595233 kubelet[2674]: E1124 07:01:21.594890 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.595233 kubelet[2674]: E1124 07:01:21.595225 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.595233 kubelet[2674]: W1124 07:01:21.595233 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.595547 kubelet[2674]: E1124 07:01:21.595246 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.597189 kubelet[2674]: E1124 07:01:21.596155 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.597189 kubelet[2674]: W1124 07:01:21.596172 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.597189 kubelet[2674]: E1124 07:01:21.596187 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.597189 kubelet[2674]: E1124 07:01:21.596677 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.597189 kubelet[2674]: W1124 07:01:21.596691 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.597189 kubelet[2674]: E1124 07:01:21.596707 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.597189 kubelet[2674]: E1124 07:01:21.597191 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.597189 kubelet[2674]: W1124 07:01:21.597201 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.599083 kubelet[2674]: E1124 07:01:21.597233 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.599083 kubelet[2674]: E1124 07:01:21.597596 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.599083 kubelet[2674]: W1124 07:01:21.597607 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.599083 kubelet[2674]: E1124 07:01:21.597637 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.599083 kubelet[2674]: E1124 07:01:21.597945 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.599083 kubelet[2674]: W1124 07:01:21.597956 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.599083 kubelet[2674]: E1124 07:01:21.597974 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.601420 kubelet[2674]: E1124 07:01:21.600966 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.601420 kubelet[2674]: W1124 07:01:21.600995 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.601420 kubelet[2674]: E1124 07:01:21.601033 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.601420 kubelet[2674]: E1124 07:01:21.601360 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.601420 kubelet[2674]: W1124 07:01:21.601372 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.602854 kubelet[2674]: E1124 07:01:21.602569 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.603762 kubelet[2674]: E1124 07:01:21.603379 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.603762 kubelet[2674]: W1124 07:01:21.603681 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.603762 kubelet[2674]: E1124 07:01:21.603737 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.604559 kubelet[2674]: E1124 07:01:21.604462 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.605074 kubelet[2674]: W1124 07:01:21.604951 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.605288 kubelet[2674]: E1124 07:01:21.605174 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.605864 kubelet[2674]: E1124 07:01:21.605738 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.605864 kubelet[2674]: W1124 07:01:21.605755 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.606583 kubelet[2674]: E1124 07:01:21.606470 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.606971 kubelet[2674]: E1124 07:01:21.606945 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.607528 kubelet[2674]: W1124 07:01:21.607136 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.607992 kubelet[2674]: E1124 07:01:21.607949 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.608362 kubelet[2674]: E1124 07:01:21.608249 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.608668 kubelet[2674]: W1124 07:01:21.608456 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.610144 kubelet[2674]: E1124 07:01:21.608788 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.611362 kubelet[2674]: E1124 07:01:21.610310 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.611362 kubelet[2674]: W1124 07:01:21.610681 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.613334 kubelet[2674]: E1124 07:01:21.613285 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.614501 kubelet[2674]: E1124 07:01:21.613706 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.614501 kubelet[2674]: W1124 07:01:21.613724 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.614501 kubelet[2674]: E1124 07:01:21.613784 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.615927 kubelet[2674]: E1124 07:01:21.615688 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.616511 kubelet[2674]: W1124 07:01:21.616277 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.617175 kubelet[2674]: E1124 07:01:21.616941 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.618750 kubelet[2674]: E1124 07:01:21.618283 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.618750 kubelet[2674]: W1124 07:01:21.618357 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.618750 kubelet[2674]: E1124 07:01:21.618377 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:21.633517 systemd[1]: Started cri-containerd-a7cba6224beb50df54719c9c579a74b6f377f057f78ae7861c048eccc9b0e038.scope - libcontainer container a7cba6224beb50df54719c9c579a74b6f377f057f78ae7861c048eccc9b0e038. Nov 24 07:01:21.641539 kubelet[2674]: E1124 07:01:21.641503 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:21.641539 kubelet[2674]: W1124 07:01:21.641531 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:21.641796 kubelet[2674]: E1124 07:01:21.641561 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:21.735680 containerd[1499]: time="2025-11-24T07:01:21.735621569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59fb8bc446-9v2z9,Uid:0f8f6935-83eb-48c1-9359-147d6a162770,Namespace:calico-system,Attempt:0,} returns sandbox id \"d88310b2da1ffad0689696ce6d729b1b9ac0c128c184bba6d2ef5e4b443e7ea9\"" Nov 24 07:01:21.737039 kubelet[2674]: E1124 07:01:21.736875 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:21.739229 containerd[1499]: time="2025-11-24T07:01:21.739188678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 07:01:21.821372 containerd[1499]: time="2025-11-24T07:01:21.821197307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5czdb,Uid:da1803b3-1d1c-4cd0-ae43-18c51b2788e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7cba6224beb50df54719c9c579a74b6f377f057f78ae7861c048eccc9b0e038\"" Nov 24 07:01:21.824007 kubelet[2674]: E1124 07:01:21.823915 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:23.138972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1411021372.mount: Deactivated successfully. 
Nov 24 07:01:23.528052 kubelet[2674]: E1124 07:01:23.527988 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15" Nov 24 07:01:23.896044 containerd[1499]: time="2025-11-24T07:01:23.895285433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:23.896483 containerd[1499]: time="2025-11-24T07:01:23.896118089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 24 07:01:23.896951 containerd[1499]: time="2025-11-24T07:01:23.896922113Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:23.899422 containerd[1499]: time="2025-11-24T07:01:23.899359893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:23.900235 containerd[1499]: time="2025-11-24T07:01:23.900200044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.160956091s" Nov 24 07:01:23.900388 containerd[1499]: time="2025-11-24T07:01:23.900373836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 07:01:23.903045 containerd[1499]: time="2025-11-24T07:01:23.902740817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 07:01:23.926459 containerd[1499]: time="2025-11-24T07:01:23.926389750Z" level=info msg="CreateContainer within sandbox \"d88310b2da1ffad0689696ce6d729b1b9ac0c128c184bba6d2ef5e4b443e7ea9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 07:01:23.936930 containerd[1499]: time="2025-11-24T07:01:23.936060274Z" level=info msg="Container a8c89fa5a406b117f36c11e8ea0365f146cea569c942c63e55730037c1ffdbd8: CDI devices from CRI Config.CDIDevices: []" Nov 24 07:01:23.945934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115620126.mount: Deactivated successfully. Nov 24 07:01:23.974409 containerd[1499]: time="2025-11-24T07:01:23.974360483Z" level=info msg="CreateContainer within sandbox \"d88310b2da1ffad0689696ce6d729b1b9ac0c128c184bba6d2ef5e4b443e7ea9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a8c89fa5a406b117f36c11e8ea0365f146cea569c942c63e55730037c1ffdbd8\"" Nov 24 07:01:23.976955 containerd[1499]: time="2025-11-24T07:01:23.975159536Z" level=info msg="StartContainer for \"a8c89fa5a406b117f36c11e8ea0365f146cea569c942c63e55730037c1ffdbd8\"" Nov 24 07:01:23.978155 containerd[1499]: time="2025-11-24T07:01:23.978065334Z" level=info msg="connecting to shim a8c89fa5a406b117f36c11e8ea0365f146cea569c942c63e55730037c1ffdbd8" address="unix:///run/containerd/s/6bf2e8f3da09c2c97625048a63f7c84255c2ef7ec6fa2fea593f489b46650d22" protocol=ttrpc version=3 Nov 24 07:01:24.012232 systemd[1]: Started cri-containerd-a8c89fa5a406b117f36c11e8ea0365f146cea569c942c63e55730037c1ffdbd8.scope - libcontainer container a8c89fa5a406b117f36c11e8ea0365f146cea569c942c63e55730037c1ffdbd8. 
Nov 24 07:01:24.109530 containerd[1499]: time="2025-11-24T07:01:24.109465788Z" level=info msg="StartContainer for \"a8c89fa5a406b117f36c11e8ea0365f146cea569c942c63e55730037c1ffdbd8\" returns successfully" Nov 24 07:01:24.668720 kubelet[2674]: E1124 07:01:24.668665 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:24.687208 kubelet[2674]: I1124 07:01:24.687021 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59fb8bc446-9v2z9" podStartSLOduration=1.524469181 podStartE2EDuration="3.686992511s" podCreationTimestamp="2025-11-24 07:01:21 +0000 UTC" firstStartedPulling="2025-11-24 07:01:21.738804049 +0000 UTC m=+22.402968872" lastFinishedPulling="2025-11-24 07:01:23.901327363 +0000 UTC m=+24.565492202" observedRunningTime="2025-11-24 07:01:24.686363856 +0000 UTC m=+25.350528701" watchObservedRunningTime="2025-11-24 07:01:24.686992511 +0000 UTC m=+25.351157357" Nov 24 07:01:24.691048 kubelet[2674]: E1124 07:01:24.691012 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.691609 kubelet[2674]: W1124 07:01:24.691292 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.691609 kubelet[2674]: E1124 07:01:24.691337 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.692288 kubelet[2674]: E1124 07:01:24.692096 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.692288 kubelet[2674]: W1124 07:01:24.692116 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.692288 kubelet[2674]: E1124 07:01:24.692145 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.692801 kubelet[2674]: E1124 07:01:24.692692 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.692801 kubelet[2674]: W1124 07:01:24.692712 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.692801 kubelet[2674]: E1124 07:01:24.692731 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.693520 kubelet[2674]: E1124 07:01:24.693285 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.693520 kubelet[2674]: W1124 07:01:24.693304 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.693520 kubelet[2674]: E1124 07:01:24.693321 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.693702 kubelet[2674]: E1124 07:01:24.693656 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.693702 kubelet[2674]: W1124 07:01:24.693685 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.693702 kubelet[2674]: E1124 07:01:24.693702 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.693999 kubelet[2674]: E1124 07:01:24.693977 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.693999 kubelet[2674]: W1124 07:01:24.693992 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.694235 kubelet[2674]: E1124 07:01:24.694006 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.694235 kubelet[2674]: E1124 07:01:24.694190 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.694235 kubelet[2674]: W1124 07:01:24.694197 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.694235 kubelet[2674]: E1124 07:01:24.694205 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.694403 kubelet[2674]: E1124 07:01:24.694353 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.694403 kubelet[2674]: W1124 07:01:24.694363 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.694403 kubelet[2674]: E1124 07:01:24.694371 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.694849 kubelet[2674]: E1124 07:01:24.694556 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.694849 kubelet[2674]: W1124 07:01:24.694565 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.694849 kubelet[2674]: E1124 07:01:24.694577 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.694849 kubelet[2674]: E1124 07:01:24.694826 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.694849 kubelet[2674]: W1124 07:01:24.694841 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.695136 kubelet[2674]: E1124 07:01:24.694854 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.695136 kubelet[2674]: E1124 07:01:24.695030 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.695136 kubelet[2674]: W1124 07:01:24.695037 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.695136 kubelet[2674]: E1124 07:01:24.695045 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.695322 kubelet[2674]: E1124 07:01:24.695179 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.695322 kubelet[2674]: W1124 07:01:24.695186 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.695322 kubelet[2674]: E1124 07:01:24.695192 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.695322 kubelet[2674]: E1124 07:01:24.695322 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.695322 kubelet[2674]: W1124 07:01:24.695328 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.695552 kubelet[2674]: E1124 07:01:24.695335 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.695552 kubelet[2674]: E1124 07:01:24.695476 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.695552 kubelet[2674]: W1124 07:01:24.695482 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.695552 kubelet[2674]: E1124 07:01:24.695490 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.695734 kubelet[2674]: E1124 07:01:24.695613 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.695734 kubelet[2674]: W1124 07:01:24.695619 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.695734 kubelet[2674]: E1124 07:01:24.695625 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.719406 kubelet[2674]: E1124 07:01:24.719317 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.719406 kubelet[2674]: W1124 07:01:24.719346 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.719406 kubelet[2674]: E1124 07:01:24.719372 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.720037 kubelet[2674]: E1124 07:01:24.720002 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.720037 kubelet[2674]: W1124 07:01:24.720017 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.720210 kubelet[2674]: E1124 07:01:24.720161 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.720522 kubelet[2674]: E1124 07:01:24.720445 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.720522 kubelet[2674]: W1124 07:01:24.720457 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.720522 kubelet[2674]: E1124 07:01:24.720474 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.720907 kubelet[2674]: E1124 07:01:24.720868 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.720907 kubelet[2674]: W1124 07:01:24.720881 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.721018 kubelet[2674]: E1124 07:01:24.720999 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.721204 kubelet[2674]: E1124 07:01:24.721183 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.721204 kubelet[2674]: W1124 07:01:24.721203 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.721277 kubelet[2674]: E1124 07:01:24.721230 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.721447 kubelet[2674]: E1124 07:01:24.721436 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.721447 kubelet[2674]: W1124 07:01:24.721446 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.721525 kubelet[2674]: E1124 07:01:24.721461 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.721690 kubelet[2674]: E1124 07:01:24.721676 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.721721 kubelet[2674]: W1124 07:01:24.721692 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.721982 kubelet[2674]: E1124 07:01:24.721760 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.722071 kubelet[2674]: E1124 07:01:24.722045 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.722071 kubelet[2674]: W1124 07:01:24.722059 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.722178 kubelet[2674]: E1124 07:01:24.722163 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.722285 kubelet[2674]: E1124 07:01:24.722271 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.722285 kubelet[2674]: W1124 07:01:24.722282 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.722501 kubelet[2674]: E1124 07:01:24.722335 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.722501 kubelet[2674]: E1124 07:01:24.722421 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.722501 kubelet[2674]: W1124 07:01:24.722428 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.722501 kubelet[2674]: E1124 07:01:24.722444 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.722928 kubelet[2674]: E1124 07:01:24.722822 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.722928 kubelet[2674]: W1124 07:01:24.722836 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.722928 kubelet[2674]: E1124 07:01:24.722854 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.723245 kubelet[2674]: E1124 07:01:24.723227 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.723512 kubelet[2674]: W1124 07:01:24.723375 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.723512 kubelet[2674]: E1124 07:01:24.723409 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.723882 kubelet[2674]: E1124 07:01:24.723864 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.724160 kubelet[2674]: W1124 07:01:24.723938 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.724160 kubelet[2674]: E1124 07:01:24.723975 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.724548 kubelet[2674]: E1124 07:01:24.724462 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.724548 kubelet[2674]: W1124 07:01:24.724474 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.724548 kubelet[2674]: E1124 07:01:24.724502 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.724929 kubelet[2674]: E1124 07:01:24.724828 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.724929 kubelet[2674]: W1124 07:01:24.724840 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.725089 kubelet[2674]: E1124 07:01:24.725028 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.725208 kubelet[2674]: E1124 07:01:24.725189 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.725336 kubelet[2674]: W1124 07:01:24.725257 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.725336 kubelet[2674]: E1124 07:01:24.725285 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:24.725594 kubelet[2674]: E1124 07:01:24.725568 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.725594 kubelet[2674]: W1124 07:01:24.725580 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.725751 kubelet[2674]: E1124 07:01:24.725702 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 07:01:24.726192 kubelet[2674]: E1124 07:01:24.726125 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 07:01:24.726192 kubelet[2674]: W1124 07:01:24.726141 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 07:01:24.726192 kubelet[2674]: E1124 07:01:24.726157 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 07:01:25.180520 containerd[1499]: time="2025-11-24T07:01:25.179994131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:25.180520 containerd[1499]: time="2025-11-24T07:01:25.180474859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 24 07:01:25.181666 containerd[1499]: time="2025-11-24T07:01:25.181629211Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:25.184038 containerd[1499]: time="2025-11-24T07:01:25.183993818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:25.184675 containerd[1499]: time="2025-11-24T07:01:25.184637519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.281856757s" Nov 24 07:01:25.184767 containerd[1499]: time="2025-11-24T07:01:25.184674207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 07:01:25.187867 containerd[1499]: time="2025-11-24T07:01:25.187791664Z" level=info msg="CreateContainer within sandbox \"a7cba6224beb50df54719c9c579a74b6f377f057f78ae7861c048eccc9b0e038\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 07:01:25.203336 containerd[1499]: time="2025-11-24T07:01:25.203277405Z" level=info msg="Container 8ba8f5ab3a23832e86bfd1e31e21e25d70abce95e817210f46773e5a2d1d9675: CDI devices from CRI Config.CDIDevices: []" Nov 24 07:01:25.216356 containerd[1499]: time="2025-11-24T07:01:25.216307700Z" level=info msg="CreateContainer within sandbox \"a7cba6224beb50df54719c9c579a74b6f377f057f78ae7861c048eccc9b0e038\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8ba8f5ab3a23832e86bfd1e31e21e25d70abce95e817210f46773e5a2d1d9675\"" Nov 24 07:01:25.217660 containerd[1499]: time="2025-11-24T07:01:25.217625230Z" level=info msg="StartContainer for \"8ba8f5ab3a23832e86bfd1e31e21e25d70abce95e817210f46773e5a2d1d9675\"" Nov 24 07:01:25.220117 containerd[1499]: time="2025-11-24T07:01:25.220066630Z" level=info msg="connecting to shim 8ba8f5ab3a23832e86bfd1e31e21e25d70abce95e817210f46773e5a2d1d9675" address="unix:///run/containerd/s/f1aa231ece507f56f41d4dac08fbac4e11711ef9112ac6cf4a28a4c976deface" protocol=ttrpc version=3 Nov 24 07:01:25.262214 systemd[1]: Started cri-containerd-8ba8f5ab3a23832e86bfd1e31e21e25d70abce95e817210f46773e5a2d1d9675.scope - libcontainer container 8ba8f5ab3a23832e86bfd1e31e21e25d70abce95e817210f46773e5a2d1d9675. Nov 24 07:01:25.362136 containerd[1499]: time="2025-11-24T07:01:25.362087072Z" level=info msg="StartContainer for \"8ba8f5ab3a23832e86bfd1e31e21e25d70abce95e817210f46773e5a2d1d9675\" returns successfully" Nov 24 07:01:25.366730 systemd[1]: cri-containerd-8ba8f5ab3a23832e86bfd1e31e21e25d70abce95e817210f46773e5a2d1d9675.scope: Deactivated successfully. 
Nov 24 07:01:25.437074 containerd[1499]: time="2025-11-24T07:01:25.436891037Z" level=info msg="received container exit event container_id:\"8ba8f5ab3a23832e86bfd1e31e21e25d70abce95e817210f46773e5a2d1d9675\" id:\"8ba8f5ab3a23832e86bfd1e31e21e25d70abce95e817210f46773e5a2d1d9675\" pid:3377 exited_at:{seconds:1763967685 nanos:372045732}" Nov 24 07:01:25.506661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ba8f5ab3a23832e86bfd1e31e21e25d70abce95e817210f46773e5a2d1d9675-rootfs.mount: Deactivated successfully. Nov 24 07:01:25.528370 kubelet[2674]: E1124 07:01:25.528320 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15" Nov 24 07:01:25.675086 kubelet[2674]: I1124 07:01:25.675047 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 07:01:25.676812 kubelet[2674]: E1124 07:01:25.676782 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:25.676971 kubelet[2674]: E1124 07:01:25.675385 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:25.680232 containerd[1499]: time="2025-11-24T07:01:25.680028691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 07:01:27.531921 kubelet[2674]: E1124 07:01:27.531828 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15" Nov 24 07:01:29.533475 kubelet[2674]: E1124 07:01:29.533415 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15" Nov 24 07:01:29.991736 containerd[1499]: time="2025-11-24T07:01:29.991677835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:29.993328 containerd[1499]: time="2025-11-24T07:01:29.993249975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 07:01:29.993976 containerd[1499]: time="2025-11-24T07:01:29.993937576Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:29.996796 containerd[1499]: time="2025-11-24T07:01:29.996743569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:29.997728 containerd[1499]: time="2025-11-24T07:01:29.997687332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.317108457s" Nov 24 07:01:29.997728 containerd[1499]: time="2025-11-24T07:01:29.997728826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" 
returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 07:01:30.001776 containerd[1499]: time="2025-11-24T07:01:30.001725116Z" level=info msg="CreateContainer within sandbox \"a7cba6224beb50df54719c9c579a74b6f377f057f78ae7861c048eccc9b0e038\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 07:01:30.032321 containerd[1499]: time="2025-11-24T07:01:30.032077600Z" level=info msg="Container fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869: CDI devices from CRI Config.CDIDevices: []" Nov 24 07:01:30.044618 containerd[1499]: time="2025-11-24T07:01:30.044568478Z" level=info msg="CreateContainer within sandbox \"a7cba6224beb50df54719c9c579a74b6f377f057f78ae7861c048eccc9b0e038\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869\"" Nov 24 07:01:30.045538 containerd[1499]: time="2025-11-24T07:01:30.045514138Z" level=info msg="StartContainer for \"fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869\"" Nov 24 07:01:30.052037 containerd[1499]: time="2025-11-24T07:01:30.051136586Z" level=info msg="connecting to shim fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869" address="unix:///run/containerd/s/f1aa231ece507f56f41d4dac08fbac4e11711ef9112ac6cf4a28a4c976deface" protocol=ttrpc version=3 Nov 24 07:01:30.094259 systemd[1]: Started cri-containerd-fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869.scope - libcontainer container fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869. 
Nov 24 07:01:30.188544 containerd[1499]: time="2025-11-24T07:01:30.188471300Z" level=info msg="StartContainer for \"fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869\" returns successfully" Nov 24 07:01:30.715800 kubelet[2674]: E1124 07:01:30.715738 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:30.995923 systemd[1]: cri-containerd-fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869.scope: Deactivated successfully. Nov 24 07:01:30.996667 systemd[1]: cri-containerd-fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869.scope: Consumed 787ms CPU time, 162.6M memory peak, 7.7M read from disk, 171.3M written to disk. Nov 24 07:01:31.005431 containerd[1499]: time="2025-11-24T07:01:31.004085439Z" level=info msg="received container exit event container_id:\"fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869\" id:\"fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869\" pid:3434 exited_at:{seconds:1763967691 nanos:3556374}" Nov 24 07:01:31.046933 kubelet[2674]: I1124 07:01:31.046394 2674 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 07:01:31.048702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb13e2c87d02bc762ff73da6e7631774cf3809b63cb5c412e8484903e3c16869-rootfs.mount: Deactivated successfully. 
Nov 24 07:01:31.113093 kubelet[2674]: I1124 07:01:31.113035 2674 status_manager.go:890] "Failed to get status for pod" podUID="c402f308-eb4f-4016-b18f-2c146b8746b7" pod="kube-system/coredns-668d6bf9bc-qjngn" err="pods \"coredns-668d6bf9bc-qjngn\" is forbidden: User \"system:node:ci-4459.2.1-b-419a632674\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.1-b-419a632674' and this object" Nov 24 07:01:31.115386 kubelet[2674]: W1124 07:01:31.115246 2674 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4459.2.1-b-419a632674" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.1-b-419a632674' and this object Nov 24 07:01:31.119460 kubelet[2674]: E1124 07:01:31.119087 2674 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4459.2.1-b-419a632674\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.1-b-419a632674' and this object" logger="UnhandledError" Nov 24 07:01:31.119948 systemd[1]: Created slice kubepods-burstable-podc402f308_eb4f_4016_b18f_2c146b8746b7.slice - libcontainer container kubepods-burstable-podc402f308_eb4f_4016_b18f_2c146b8746b7.slice. Nov 24 07:01:31.145274 systemd[1]: Created slice kubepods-besteffort-pod896e0aae_86f3_4e9a_840a_bc361b5d15a9.slice - libcontainer container kubepods-besteffort-pod896e0aae_86f3_4e9a_840a_bc361b5d15a9.slice. Nov 24 07:01:31.172769 systemd[1]: Created slice kubepods-burstable-pod8cbd5edf_7893_48d4_8ee9_18409fdb58f5.slice - libcontainer container kubepods-burstable-pod8cbd5edf_7893_48d4_8ee9_18409fdb58f5.slice. 
Nov 24 07:01:31.176468 kubelet[2674]: I1124 07:01:31.176422 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/246e1c4c-d135-4d73-8092-61385bbba6cb-goldmane-ca-bundle\") pod \"goldmane-666569f655-9pgxc\" (UID: \"246e1c4c-d135-4d73-8092-61385bbba6cb\") " pod="calico-system/goldmane-666569f655-9pgxc" Nov 24 07:01:31.176468 kubelet[2674]: I1124 07:01:31.176457 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/246e1c4c-d135-4d73-8092-61385bbba6cb-goldmane-key-pair\") pod \"goldmane-666569f655-9pgxc\" (UID: \"246e1c4c-d135-4d73-8092-61385bbba6cb\") " pod="calico-system/goldmane-666569f655-9pgxc" Nov 24 07:01:31.176468 kubelet[2674]: I1124 07:01:31.176478 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3e5450f-313c-477b-87ac-4da097ca2eb2-tigera-ca-bundle\") pod \"calico-kube-controllers-7f9b89cb9c-ljkq4\" (UID: \"e3e5450f-313c-477b-87ac-4da097ca2eb2\") " pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" Nov 24 07:01:31.176737 kubelet[2674]: I1124 07:01:31.176495 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8bd2d503-fa71-45be-9f52-ae92f15b3067-calico-apiserver-certs\") pod \"calico-apiserver-66b7cb7b4d-lbdp7\" (UID: \"8bd2d503-fa71-45be-9f52-ae92f15b3067\") " pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" Nov 24 07:01:31.176737 kubelet[2674]: I1124 07:01:31.176524 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c402f308-eb4f-4016-b18f-2c146b8746b7-config-volume\") pod \"coredns-668d6bf9bc-qjngn\" (UID: 
\"c402f308-eb4f-4016-b18f-2c146b8746b7\") " pod="kube-system/coredns-668d6bf9bc-qjngn" Nov 24 07:01:31.176737 kubelet[2674]: I1124 07:01:31.176544 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6ksg\" (UniqueName: \"kubernetes.io/projected/896e0aae-86f3-4e9a-840a-bc361b5d15a9-kube-api-access-q6ksg\") pod \"whisker-b569868b4-rmtth\" (UID: \"896e0aae-86f3-4e9a-840a-bc361b5d15a9\") " pod="calico-system/whisker-b569868b4-rmtth" Nov 24 07:01:31.176737 kubelet[2674]: I1124 07:01:31.176566 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxmdl\" (UniqueName: \"kubernetes.io/projected/c402f308-eb4f-4016-b18f-2c146b8746b7-kube-api-access-zxmdl\") pod \"coredns-668d6bf9bc-qjngn\" (UID: \"c402f308-eb4f-4016-b18f-2c146b8746b7\") " pod="kube-system/coredns-668d6bf9bc-qjngn" Nov 24 07:01:31.176737 kubelet[2674]: I1124 07:01:31.176581 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/246e1c4c-d135-4d73-8092-61385bbba6cb-config\") pod \"goldmane-666569f655-9pgxc\" (UID: \"246e1c4c-d135-4d73-8092-61385bbba6cb\") " pod="calico-system/goldmane-666569f655-9pgxc" Nov 24 07:01:31.176993 kubelet[2674]: I1124 07:01:31.176599 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v24ss\" (UniqueName: \"kubernetes.io/projected/8cbd5edf-7893-48d4-8ee9-18409fdb58f5-kube-api-access-v24ss\") pod \"coredns-668d6bf9bc-fn526\" (UID: \"8cbd5edf-7893-48d4-8ee9-18409fdb58f5\") " pod="kube-system/coredns-668d6bf9bc-fn526" Nov 24 07:01:31.176993 kubelet[2674]: I1124 07:01:31.176621 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkftp\" (UniqueName: \"kubernetes.io/projected/246e1c4c-d135-4d73-8092-61385bbba6cb-kube-api-access-jkftp\") 
pod \"goldmane-666569f655-9pgxc\" (UID: \"246e1c4c-d135-4d73-8092-61385bbba6cb\") " pod="calico-system/goldmane-666569f655-9pgxc" Nov 24 07:01:31.176993 kubelet[2674]: I1124 07:01:31.176636 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwn5k\" (UniqueName: \"kubernetes.io/projected/8bd2d503-fa71-45be-9f52-ae92f15b3067-kube-api-access-cwn5k\") pod \"calico-apiserver-66b7cb7b4d-lbdp7\" (UID: \"8bd2d503-fa71-45be-9f52-ae92f15b3067\") " pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" Nov 24 07:01:31.176993 kubelet[2674]: I1124 07:01:31.176652 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/896e0aae-86f3-4e9a-840a-bc361b5d15a9-whisker-backend-key-pair\") pod \"whisker-b569868b4-rmtth\" (UID: \"896e0aae-86f3-4e9a-840a-bc361b5d15a9\") " pod="calico-system/whisker-b569868b4-rmtth" Nov 24 07:01:31.176993 kubelet[2674]: I1124 07:01:31.176669 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/896e0aae-86f3-4e9a-840a-bc361b5d15a9-whisker-ca-bundle\") pod \"whisker-b569868b4-rmtth\" (UID: \"896e0aae-86f3-4e9a-840a-bc361b5d15a9\") " pod="calico-system/whisker-b569868b4-rmtth" Nov 24 07:01:31.177229 kubelet[2674]: I1124 07:01:31.176684 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cbd5edf-7893-48d4-8ee9-18409fdb58f5-config-volume\") pod \"coredns-668d6bf9bc-fn526\" (UID: \"8cbd5edf-7893-48d4-8ee9-18409fdb58f5\") " pod="kube-system/coredns-668d6bf9bc-fn526" Nov 24 07:01:31.177229 kubelet[2674]: I1124 07:01:31.176703 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z984\" (UniqueName: 
\"kubernetes.io/projected/e3e5450f-313c-477b-87ac-4da097ca2eb2-kube-api-access-8z984\") pod \"calico-kube-controllers-7f9b89cb9c-ljkq4\" (UID: \"e3e5450f-313c-477b-87ac-4da097ca2eb2\") " pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" Nov 24 07:01:31.186529 systemd[1]: Created slice kubepods-besteffort-pode3e5450f_313c_477b_87ac_4da097ca2eb2.slice - libcontainer container kubepods-besteffort-pode3e5450f_313c_477b_87ac_4da097ca2eb2.slice. Nov 24 07:01:31.204881 systemd[1]: Created slice kubepods-besteffort-pod246e1c4c_d135_4d73_8092_61385bbba6cb.slice - libcontainer container kubepods-besteffort-pod246e1c4c_d135_4d73_8092_61385bbba6cb.slice. Nov 24 07:01:31.218981 systemd[1]: Created slice kubepods-besteffort-pod8bd2d503_fa71_45be_9f52_ae92f15b3067.slice - libcontainer container kubepods-besteffort-pod8bd2d503_fa71_45be_9f52_ae92f15b3067.slice. Nov 24 07:01:31.236587 systemd[1]: Created slice kubepods-besteffort-pode713eb78_8f4a_4fad_881a_0e37cd3c7e10.slice - libcontainer container kubepods-besteffort-pode713eb78_8f4a_4fad_881a_0e37cd3c7e10.slice. 
Nov 24 07:01:31.277443 kubelet[2674]: I1124 07:01:31.277272 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e713eb78-8f4a-4fad-881a-0e37cd3c7e10-calico-apiserver-certs\") pod \"calico-apiserver-66b7cb7b4d-2jnsq\" (UID: \"e713eb78-8f4a-4fad-881a-0e37cd3c7e10\") " pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" Nov 24 07:01:31.282940 kubelet[2674]: I1124 07:01:31.282859 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cfnz\" (UniqueName: \"kubernetes.io/projected/e713eb78-8f4a-4fad-881a-0e37cd3c7e10-kube-api-access-9cfnz\") pod \"calico-apiserver-66b7cb7b4d-2jnsq\" (UID: \"e713eb78-8f4a-4fad-881a-0e37cd3c7e10\") " pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" Nov 24 07:01:31.457280 containerd[1499]: time="2025-11-24T07:01:31.457230071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b569868b4-rmtth,Uid:896e0aae-86f3-4e9a-840a-bc361b5d15a9,Namespace:calico-system,Attempt:0,}" Nov 24 07:01:31.496928 containerd[1499]: time="2025-11-24T07:01:31.496247931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9b89cb9c-ljkq4,Uid:e3e5450f-313c-477b-87ac-4da097ca2eb2,Namespace:calico-system,Attempt:0,}" Nov 24 07:01:31.517839 containerd[1499]: time="2025-11-24T07:01:31.517360973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9pgxc,Uid:246e1c4c-d135-4d73-8092-61385bbba6cb,Namespace:calico-system,Attempt:0,}" Nov 24 07:01:31.530363 containerd[1499]: time="2025-11-24T07:01:31.530223806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b7cb7b4d-lbdp7,Uid:8bd2d503-fa71-45be-9f52-ae92f15b3067,Namespace:calico-apiserver,Attempt:0,}" Nov 24 07:01:31.542809 systemd[1]: Created slice kubepods-besteffort-podc34e73b2_4364_4998_a5a0_398cc36c9e15.slice - libcontainer 
container kubepods-besteffort-podc34e73b2_4364_4998_a5a0_398cc36c9e15.slice. Nov 24 07:01:31.554828 containerd[1499]: time="2025-11-24T07:01:31.554781375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq9xq,Uid:c34e73b2-4364-4998-a5a0-398cc36c9e15,Namespace:calico-system,Attempt:0,}" Nov 24 07:01:31.556362 containerd[1499]: time="2025-11-24T07:01:31.556321664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b7cb7b4d-2jnsq,Uid:e713eb78-8f4a-4fad-881a-0e37cd3c7e10,Namespace:calico-apiserver,Attempt:0,}" Nov 24 07:01:31.741749 kubelet[2674]: E1124 07:01:31.741519 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:31.790071 containerd[1499]: time="2025-11-24T07:01:31.789674103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 07:01:31.837675 containerd[1499]: time="2025-11-24T07:01:31.837624855Z" level=error msg="Failed to destroy network for sandbox \"7f66d8d72d5881e936ef1efb2526b901d03601d1a1ed96af776ffda0b58087dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.844628 containerd[1499]: time="2025-11-24T07:01:31.844578054Z" level=error msg="Failed to destroy network for sandbox \"508d6dc66d4d4fdb021149db6b33aab29c8a13287e46f08ea8a31ba45e2e3db2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.869887 containerd[1499]: time="2025-11-24T07:01:31.844822025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b569868b4-rmtth,Uid:896e0aae-86f3-4e9a-840a-bc361b5d15a9,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f66d8d72d5881e936ef1efb2526b901d03601d1a1ed96af776ffda0b58087dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.870606 containerd[1499]: time="2025-11-24T07:01:31.859074898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b7cb7b4d-lbdp7,Uid:8bd2d503-fa71-45be-9f52-ae92f15b3067,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"508d6dc66d4d4fdb021149db6b33aab29c8a13287e46f08ea8a31ba45e2e3db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.881626 containerd[1499]: time="2025-11-24T07:01:31.880614349Z" level=error msg="Failed to destroy network for sandbox \"1deeae4c45344ebb5f2fe0f389dce59c4ab36e059999db8a50b3550123a8ae45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.881626 containerd[1499]: time="2025-11-24T07:01:31.861110604Z" level=error msg="Failed to destroy network for sandbox \"dc235addc643611c1d9870280f77736dbb88e626e8241e36e19ec9c3328a4b27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.882514 kubelet[2674]: E1124 07:01:31.881151 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f66d8d72d5881e936ef1efb2526b901d03601d1a1ed96af776ffda0b58087dc\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.882514 kubelet[2674]: E1124 07:01:31.881234 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f66d8d72d5881e936ef1efb2526b901d03601d1a1ed96af776ffda0b58087dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b569868b4-rmtth" Nov 24 07:01:31.882514 kubelet[2674]: E1124 07:01:31.881260 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f66d8d72d5881e936ef1efb2526b901d03601d1a1ed96af776ffda0b58087dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b569868b4-rmtth" Nov 24 07:01:31.883217 containerd[1499]: time="2025-11-24T07:01:31.882461362Z" level=error msg="Failed to destroy network for sandbox \"3898a8a23aa77630496524ff46d9620e24fdeeea99da32ce3508fcd90203ca87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.883428 kubelet[2674]: E1124 07:01:31.881315 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b569868b4-rmtth_calico-system(896e0aae-86f3-4e9a-840a-bc361b5d15a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b569868b4-rmtth_calico-system(896e0aae-86f3-4e9a-840a-bc361b5d15a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"7f66d8d72d5881e936ef1efb2526b901d03601d1a1ed96af776ffda0b58087dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b569868b4-rmtth" podUID="896e0aae-86f3-4e9a-840a-bc361b5d15a9" Nov 24 07:01:31.883428 kubelet[2674]: E1124 07:01:31.882413 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"508d6dc66d4d4fdb021149db6b33aab29c8a13287e46f08ea8a31ba45e2e3db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.883428 kubelet[2674]: E1124 07:01:31.882658 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"508d6dc66d4d4fdb021149db6b33aab29c8a13287e46f08ea8a31ba45e2e3db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" Nov 24 07:01:31.884671 kubelet[2674]: E1124 07:01:31.882684 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"508d6dc66d4d4fdb021149db6b33aab29c8a13287e46f08ea8a31ba45e2e3db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" Nov 24 07:01:31.884671 kubelet[2674]: E1124 07:01:31.882831 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-66b7cb7b4d-lbdp7_calico-apiserver(8bd2d503-fa71-45be-9f52-ae92f15b3067)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b7cb7b4d-lbdp7_calico-apiserver(8bd2d503-fa71-45be-9f52-ae92f15b3067)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"508d6dc66d4d4fdb021149db6b33aab29c8a13287e46f08ea8a31ba45e2e3db2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" podUID="8bd2d503-fa71-45be-9f52-ae92f15b3067" Nov 24 07:01:31.885959 containerd[1499]: time="2025-11-24T07:01:31.885847741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9pgxc,Uid:246e1c4c-d135-4d73-8092-61385bbba6cb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1deeae4c45344ebb5f2fe0f389dce59c4ab36e059999db8a50b3550123a8ae45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.887435 kubelet[2674]: E1124 07:01:31.887370 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1deeae4c45344ebb5f2fe0f389dce59c4ab36e059999db8a50b3550123a8ae45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.887652 kubelet[2674]: E1124 07:01:31.887627 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1deeae4c45344ebb5f2fe0f389dce59c4ab36e059999db8a50b3550123a8ae45\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9pgxc" Nov 24 07:01:31.888601 containerd[1499]: time="2025-11-24T07:01:31.887788400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b7cb7b4d-2jnsq,Uid:e713eb78-8f4a-4fad-881a-0e37cd3c7e10,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc235addc643611c1d9870280f77736dbb88e626e8241e36e19ec9c3328a4b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.888601 containerd[1499]: time="2025-11-24T07:01:31.888197345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9b89cb9c-ljkq4,Uid:e3e5450f-313c-477b-87ac-4da097ca2eb2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3898a8a23aa77630496524ff46d9620e24fdeeea99da32ce3508fcd90203ca87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.888754 kubelet[2674]: E1124 07:01:31.887930 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1deeae4c45344ebb5f2fe0f389dce59c4ab36e059999db8a50b3550123a8ae45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9pgxc" Nov 24 07:01:31.888754 kubelet[2674]: E1124 07:01:31.887993 2674 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9pgxc_calico-system(246e1c4c-d135-4d73-8092-61385bbba6cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9pgxc_calico-system(246e1c4c-d135-4d73-8092-61385bbba6cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1deeae4c45344ebb5f2fe0f389dce59c4ab36e059999db8a50b3550123a8ae45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9pgxc" podUID="246e1c4c-d135-4d73-8092-61385bbba6cb" Nov 24 07:01:31.888754 kubelet[2674]: E1124 07:01:31.888093 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc235addc643611c1d9870280f77736dbb88e626e8241e36e19ec9c3328a4b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.888865 kubelet[2674]: E1124 07:01:31.888119 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc235addc643611c1d9870280f77736dbb88e626e8241e36e19ec9c3328a4b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" Nov 24 07:01:31.888865 kubelet[2674]: E1124 07:01:31.888136 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc235addc643611c1d9870280f77736dbb88e626e8241e36e19ec9c3328a4b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" Nov 24 07:01:31.888865 kubelet[2674]: E1124 07:01:31.888159 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b7cb7b4d-2jnsq_calico-apiserver(e713eb78-8f4a-4fad-881a-0e37cd3c7e10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b7cb7b4d-2jnsq_calico-apiserver(e713eb78-8f4a-4fad-881a-0e37cd3c7e10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc235addc643611c1d9870280f77736dbb88e626e8241e36e19ec9c3328a4b27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" podUID="e713eb78-8f4a-4fad-881a-0e37cd3c7e10" Nov 24 07:01:31.888972 kubelet[2674]: E1124 07:01:31.888362 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3898a8a23aa77630496524ff46d9620e24fdeeea99da32ce3508fcd90203ca87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.888972 kubelet[2674]: E1124 07:01:31.888385 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3898a8a23aa77630496524ff46d9620e24fdeeea99da32ce3508fcd90203ca87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" Nov 24 07:01:31.888972 kubelet[2674]: E1124 07:01:31.888399 
2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3898a8a23aa77630496524ff46d9620e24fdeeea99da32ce3508fcd90203ca87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" Nov 24 07:01:31.889048 kubelet[2674]: E1124 07:01:31.888423 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9b89cb9c-ljkq4_calico-system(e3e5450f-313c-477b-87ac-4da097ca2eb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9b89cb9c-ljkq4_calico-system(e3e5450f-313c-477b-87ac-4da097ca2eb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3898a8a23aa77630496524ff46d9620e24fdeeea99da32ce3508fcd90203ca87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" podUID="e3e5450f-313c-477b-87ac-4da097ca2eb2" Nov 24 07:01:31.897631 containerd[1499]: time="2025-11-24T07:01:31.897584182Z" level=error msg="Failed to destroy network for sandbox \"7daadedd56f37f896bf9ae67e6e3806e460195fd0aa5a5843d3b18d1132730ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.900733 containerd[1499]: time="2025-11-24T07:01:31.900545758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq9xq,Uid:c34e73b2-4364-4998-a5a0-398cc36c9e15,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"7daadedd56f37f896bf9ae67e6e3806e460195fd0aa5a5843d3b18d1132730ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.901742 kubelet[2674]: E1124 07:01:31.901421 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7daadedd56f37f896bf9ae67e6e3806e460195fd0aa5a5843d3b18d1132730ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:31.901742 kubelet[2674]: E1124 07:01:31.901517 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7daadedd56f37f896bf9ae67e6e3806e460195fd0aa5a5843d3b18d1132730ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xq9xq" Nov 24 07:01:31.901742 kubelet[2674]: E1124 07:01:31.901568 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7daadedd56f37f896bf9ae67e6e3806e460195fd0aa5a5843d3b18d1132730ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xq9xq" Nov 24 07:01:31.902150 kubelet[2674]: E1124 07:01:31.901650 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xq9xq_calico-system(c34e73b2-4364-4998-a5a0-398cc36c9e15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-xq9xq_calico-system(c34e73b2-4364-4998-a5a0-398cc36c9e15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7daadedd56f37f896bf9ae67e6e3806e460195fd0aa5a5843d3b18d1132730ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15" Nov 24 07:01:32.280802 kubelet[2674]: E1124 07:01:32.280298 2674 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 24 07:01:32.280802 kubelet[2674]: E1124 07:01:32.280423 2674 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c402f308-eb4f-4016-b18f-2c146b8746b7-config-volume podName:c402f308-eb4f-4016-b18f-2c146b8746b7 nodeName:}" failed. No retries permitted until 2025-11-24 07:01:32.780399125 +0000 UTC m=+33.444563964 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c402f308-eb4f-4016-b18f-2c146b8746b7-config-volume") pod "coredns-668d6bf9bc-qjngn" (UID: "c402f308-eb4f-4016-b18f-2c146b8746b7") : failed to sync configmap cache: timed out waiting for the condition Nov 24 07:01:32.286467 kubelet[2674]: E1124 07:01:32.286416 2674 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 24 07:01:32.287033 kubelet[2674]: E1124 07:01:32.286753 2674 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8cbd5edf-7893-48d4-8ee9-18409fdb58f5-config-volume podName:8cbd5edf-7893-48d4-8ee9-18409fdb58f5 nodeName:}" failed. No retries permitted until 2025-11-24 07:01:32.786726905 +0000 UTC m=+33.450891728 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8cbd5edf-7893-48d4-8ee9-18409fdb58f5-config-volume") pod "coredns-668d6bf9bc-fn526" (UID: "8cbd5edf-7893-48d4-8ee9-18409fdb58f5") : failed to sync configmap cache: timed out waiting for the condition Nov 24 07:01:32.936877 kubelet[2674]: E1124 07:01:32.936738 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:32.939287 containerd[1499]: time="2025-11-24T07:01:32.939209199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qjngn,Uid:c402f308-eb4f-4016-b18f-2c146b8746b7,Namespace:kube-system,Attempt:0,}" Nov 24 07:01:32.982038 kubelet[2674]: E1124 07:01:32.981866 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:32.985154 containerd[1499]: time="2025-11-24T07:01:32.985080923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fn526,Uid:8cbd5edf-7893-48d4-8ee9-18409fdb58f5,Namespace:kube-system,Attempt:0,}" Nov 24 07:01:33.259150 containerd[1499]: time="2025-11-24T07:01:33.258948893Z" level=error msg="Failed to destroy network for sandbox \"2603d1ef077f5fbbd45db2ebfc15fa78f633ddb3f76009ae4246185774866152\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:33.264773 systemd[1]: run-netns-cni\x2de65b3863\x2d7b17\x2d2568\x2deb8b\x2d33883ec9292a.mount: Deactivated successfully. 
Nov 24 07:01:33.271496 containerd[1499]: time="2025-11-24T07:01:33.271427934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fn526,Uid:8cbd5edf-7893-48d4-8ee9-18409fdb58f5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2603d1ef077f5fbbd45db2ebfc15fa78f633ddb3f76009ae4246185774866152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:33.273385 kubelet[2674]: E1124 07:01:33.272982 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2603d1ef077f5fbbd45db2ebfc15fa78f633ddb3f76009ae4246185774866152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:33.273385 kubelet[2674]: E1124 07:01:33.273122 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2603d1ef077f5fbbd45db2ebfc15fa78f633ddb3f76009ae4246185774866152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fn526" Nov 24 07:01:33.273385 kubelet[2674]: E1124 07:01:33.273160 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2603d1ef077f5fbbd45db2ebfc15fa78f633ddb3f76009ae4246185774866152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-fn526" Nov 24 07:01:33.273840 kubelet[2674]: E1124 07:01:33.273271 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fn526_kube-system(8cbd5edf-7893-48d4-8ee9-18409fdb58f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fn526_kube-system(8cbd5edf-7893-48d4-8ee9-18409fdb58f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2603d1ef077f5fbbd45db2ebfc15fa78f633ddb3f76009ae4246185774866152\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fn526" podUID="8cbd5edf-7893-48d4-8ee9-18409fdb58f5" Nov 24 07:01:33.289425 containerd[1499]: time="2025-11-24T07:01:33.289357501Z" level=error msg="Failed to destroy network for sandbox \"c10eb06f6f472f11791f3f460be5c31fb7c2fef901750df0a1aa7b94977b1833\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:33.294294 systemd[1]: run-netns-cni\x2d9cc38398\x2d44a5\x2d93c1\x2d4fc7\x2de8a979b42591.mount: Deactivated successfully. 
Nov 24 07:01:33.297763 containerd[1499]: time="2025-11-24T07:01:33.297676200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qjngn,Uid:c402f308-eb4f-4016-b18f-2c146b8746b7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10eb06f6f472f11791f3f460be5c31fb7c2fef901750df0a1aa7b94977b1833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:33.300962 kubelet[2674]: E1124 07:01:33.299088 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10eb06f6f472f11791f3f460be5c31fb7c2fef901750df0a1aa7b94977b1833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 07:01:33.300962 kubelet[2674]: E1124 07:01:33.299154 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10eb06f6f472f11791f3f460be5c31fb7c2fef901750df0a1aa7b94977b1833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qjngn" Nov 24 07:01:33.300962 kubelet[2674]: E1124 07:01:33.299184 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10eb06f6f472f11791f3f460be5c31fb7c2fef901750df0a1aa7b94977b1833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-qjngn" Nov 24 07:01:33.301326 kubelet[2674]: E1124 07:01:33.299227 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qjngn_kube-system(c402f308-eb4f-4016-b18f-2c146b8746b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qjngn_kube-system(c402f308-eb4f-4016-b18f-2c146b8746b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c10eb06f6f472f11791f3f460be5c31fb7c2fef901750df0a1aa7b94977b1833\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qjngn" podUID="c402f308-eb4f-4016-b18f-2c146b8746b7" Nov 24 07:01:37.894104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976634658.mount: Deactivated successfully. Nov 24 07:01:37.925697 containerd[1499]: time="2025-11-24T07:01:37.925579185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:37.927363 containerd[1499]: time="2025-11-24T07:01:37.927304061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 07:01:37.928998 containerd[1499]: time="2025-11-24T07:01:37.928951848Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:37.931573 containerd[1499]: time="2025-11-24T07:01:37.930865549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 07:01:37.932567 containerd[1499]: time="2025-11-24T07:01:37.931525572Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.14180001s" Nov 24 07:01:37.932567 containerd[1499]: time="2025-11-24T07:01:37.932321599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 07:01:37.973730 containerd[1499]: time="2025-11-24T07:01:37.973668373Z" level=info msg="CreateContainer within sandbox \"a7cba6224beb50df54719c9c579a74b6f377f057f78ae7861c048eccc9b0e038\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 07:01:37.990204 containerd[1499]: time="2025-11-24T07:01:37.989013150Z" level=info msg="Container feaec7e3f75d22c58549a70ab274a41d60922bb4a2cbdc663266364b72202ce3: CDI devices from CRI Config.CDIDevices: []" Nov 24 07:01:37.995201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484877768.mount: Deactivated successfully. 
Nov 24 07:01:38.061916 containerd[1499]: time="2025-11-24T07:01:38.061748062Z" level=info msg="CreateContainer within sandbox \"a7cba6224beb50df54719c9c579a74b6f377f057f78ae7861c048eccc9b0e038\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"feaec7e3f75d22c58549a70ab274a41d60922bb4a2cbdc663266364b72202ce3\"" Nov 24 07:01:38.064978 containerd[1499]: time="2025-11-24T07:01:38.063087922Z" level=info msg="StartContainer for \"feaec7e3f75d22c58549a70ab274a41d60922bb4a2cbdc663266364b72202ce3\"" Nov 24 07:01:38.066004 containerd[1499]: time="2025-11-24T07:01:38.065652963Z" level=info msg="connecting to shim feaec7e3f75d22c58549a70ab274a41d60922bb4a2cbdc663266364b72202ce3" address="unix:///run/containerd/s/f1aa231ece507f56f41d4dac08fbac4e11711ef9112ac6cf4a28a4c976deface" protocol=ttrpc version=3 Nov 24 07:01:38.228237 systemd[1]: Started cri-containerd-feaec7e3f75d22c58549a70ab274a41d60922bb4a2cbdc663266364b72202ce3.scope - libcontainer container feaec7e3f75d22c58549a70ab274a41d60922bb4a2cbdc663266364b72202ce3. Nov 24 07:01:38.399996 containerd[1499]: time="2025-11-24T07:01:38.399932431Z" level=info msg="StartContainer for \"feaec7e3f75d22c58549a70ab274a41d60922bb4a2cbdc663266364b72202ce3\" returns successfully" Nov 24 07:01:38.515208 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 24 07:01:38.515374 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 24 07:01:38.764601 kubelet[2674]: I1124 07:01:38.764125 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/896e0aae-86f3-4e9a-840a-bc361b5d15a9-whisker-backend-key-pair\") pod \"896e0aae-86f3-4e9a-840a-bc361b5d15a9\" (UID: \"896e0aae-86f3-4e9a-840a-bc361b5d15a9\") " Nov 24 07:01:38.766140 kubelet[2674]: I1124 07:01:38.765274 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/896e0aae-86f3-4e9a-840a-bc361b5d15a9-whisker-ca-bundle\") pod \"896e0aae-86f3-4e9a-840a-bc361b5d15a9\" (UID: \"896e0aae-86f3-4e9a-840a-bc361b5d15a9\") " Nov 24 07:01:38.766140 kubelet[2674]: I1124 07:01:38.765338 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6ksg\" (UniqueName: \"kubernetes.io/projected/896e0aae-86f3-4e9a-840a-bc361b5d15a9-kube-api-access-q6ksg\") pod \"896e0aae-86f3-4e9a-840a-bc361b5d15a9\" (UID: \"896e0aae-86f3-4e9a-840a-bc361b5d15a9\") " Nov 24 07:01:38.777824 kubelet[2674]: I1124 07:01:38.777675 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896e0aae-86f3-4e9a-840a-bc361b5d15a9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "896e0aae-86f3-4e9a-840a-bc361b5d15a9" (UID: "896e0aae-86f3-4e9a-840a-bc361b5d15a9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 07:01:38.778867 kubelet[2674]: I1124 07:01:38.778569 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/896e0aae-86f3-4e9a-840a-bc361b5d15a9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "896e0aae-86f3-4e9a-840a-bc361b5d15a9" (UID: "896e0aae-86f3-4e9a-840a-bc361b5d15a9"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 07:01:38.787485 kubelet[2674]: I1124 07:01:38.787432 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896e0aae-86f3-4e9a-840a-bc361b5d15a9-kube-api-access-q6ksg" (OuterVolumeSpecName: "kube-api-access-q6ksg") pod "896e0aae-86f3-4e9a-840a-bc361b5d15a9" (UID: "896e0aae-86f3-4e9a-840a-bc361b5d15a9"). InnerVolumeSpecName "kube-api-access-q6ksg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 07:01:38.801655 kubelet[2674]: E1124 07:01:38.801609 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:38.809131 systemd[1]: Removed slice kubepods-besteffort-pod896e0aae_86f3_4e9a_840a_bc361b5d15a9.slice - libcontainer container kubepods-besteffort-pod896e0aae_86f3_4e9a_840a_bc361b5d15a9.slice. Nov 24 07:01:38.847475 kubelet[2674]: I1124 07:01:38.847402 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5czdb" podStartSLOduration=1.738929784 podStartE2EDuration="17.847379936s" podCreationTimestamp="2025-11-24 07:01:21 +0000 UTC" firstStartedPulling="2025-11-24 07:01:21.824789369 +0000 UTC m=+22.488954206" lastFinishedPulling="2025-11-24 07:01:37.933239523 +0000 UTC m=+38.597404358" observedRunningTime="2025-11-24 07:01:38.843094297 +0000 UTC m=+39.507259158" watchObservedRunningTime="2025-11-24 07:01:38.847379936 +0000 UTC m=+39.511544782" Nov 24 07:01:38.866295 kubelet[2674]: I1124 07:01:38.866232 2674 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/896e0aae-86f3-4e9a-840a-bc361b5d15a9-whisker-backend-key-pair\") on node \"ci-4459.2.1-b-419a632674\" DevicePath \"\"" Nov 24 07:01:38.866839 kubelet[2674]: I1124 07:01:38.866267 2674 reconciler_common.go:299] "Volume detached for 
volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/896e0aae-86f3-4e9a-840a-bc361b5d15a9-whisker-ca-bundle\") on node \"ci-4459.2.1-b-419a632674\" DevicePath \"\"" Nov 24 07:01:38.866839 kubelet[2674]: I1124 07:01:38.866593 2674 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q6ksg\" (UniqueName: \"kubernetes.io/projected/896e0aae-86f3-4e9a-840a-bc361b5d15a9-kube-api-access-q6ksg\") on node \"ci-4459.2.1-b-419a632674\" DevicePath \"\"" Nov 24 07:01:38.896485 systemd[1]: var-lib-kubelet-pods-896e0aae\x2d86f3\x2d4e9a\x2d840a\x2dbc361b5d15a9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq6ksg.mount: Deactivated successfully. Nov 24 07:01:38.896596 systemd[1]: var-lib-kubelet-pods-896e0aae\x2d86f3\x2d4e9a\x2d840a\x2dbc361b5d15a9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 24 07:01:38.954610 systemd[1]: Created slice kubepods-besteffort-pod45848e32_6c02_41b9_837d_21663011857a.slice - libcontainer container kubepods-besteffort-pod45848e32_6c02_41b9_837d_21663011857a.slice. 
Nov 24 07:01:39.068562 kubelet[2674]: I1124 07:01:39.067789 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/45848e32-6c02-41b9-837d-21663011857a-whisker-backend-key-pair\") pod \"whisker-865bc4d9cd-wsm2s\" (UID: \"45848e32-6c02-41b9-837d-21663011857a\") " pod="calico-system/whisker-865bc4d9cd-wsm2s" Nov 24 07:01:39.068562 kubelet[2674]: I1124 07:01:39.067847 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6h5\" (UniqueName: \"kubernetes.io/projected/45848e32-6c02-41b9-837d-21663011857a-kube-api-access-7c6h5\") pod \"whisker-865bc4d9cd-wsm2s\" (UID: \"45848e32-6c02-41b9-837d-21663011857a\") " pod="calico-system/whisker-865bc4d9cd-wsm2s" Nov 24 07:01:39.068562 kubelet[2674]: I1124 07:01:39.067876 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45848e32-6c02-41b9-837d-21663011857a-whisker-ca-bundle\") pod \"whisker-865bc4d9cd-wsm2s\" (UID: \"45848e32-6c02-41b9-837d-21663011857a\") " pod="calico-system/whisker-865bc4d9cd-wsm2s" Nov 24 07:01:39.262854 containerd[1499]: time="2025-11-24T07:01:39.262773995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-865bc4d9cd-wsm2s,Uid:45848e32-6c02-41b9-837d-21663011857a,Namespace:calico-system,Attempt:0,}" Nov 24 07:01:39.532519 kubelet[2674]: I1124 07:01:39.532470 2674 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="896e0aae-86f3-4e9a-840a-bc361b5d15a9" path="/var/lib/kubelet/pods/896e0aae-86f3-4e9a-840a-bc361b5d15a9/volumes" Nov 24 07:01:39.593788 systemd-networkd[1423]: calib91c67a3f0d: Link UP Nov 24 07:01:39.594470 systemd-networkd[1423]: calib91c67a3f0d: Gained carrier Nov 24 07:01:39.615839 containerd[1499]: 2025-11-24 07:01:39.300 [INFO][3757] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Nov 24 07:01:39.615839 containerd[1499]: 2025-11-24 07:01:39.339 [INFO][3757] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0 whisker-865bc4d9cd- calico-system 45848e32-6c02-41b9-837d-21663011857a 936 0 2025-11-24 07:01:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:865bc4d9cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.1-b-419a632674 whisker-865bc4d9cd-wsm2s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib91c67a3f0d [] [] }} ContainerID="329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" Namespace="calico-system" Pod="whisker-865bc4d9cd-wsm2s" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-" Nov 24 07:01:39.615839 containerd[1499]: 2025-11-24 07:01:39.339 [INFO][3757] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" Namespace="calico-system" Pod="whisker-865bc4d9cd-wsm2s" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0" Nov 24 07:01:39.615839 containerd[1499]: 2025-11-24 07:01:39.499 [INFO][3769] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" HandleID="k8s-pod-network.329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" Workload="ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0" Nov 24 07:01:39.618218 containerd[1499]: 2025-11-24 07:01:39.502 [INFO][3769] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" 
HandleID="k8s-pod-network.329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" Workload="ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003850f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-b-419a632674", "pod":"whisker-865bc4d9cd-wsm2s", "timestamp":"2025-11-24 07:01:39.499540209 +0000 UTC"}, Hostname:"ci-4459.2.1-b-419a632674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 07:01:39.618218 containerd[1499]: 2025-11-24 07:01:39.502 [INFO][3769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 07:01:39.618218 containerd[1499]: 2025-11-24 07:01:39.503 [INFO][3769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 07:01:39.618218 containerd[1499]: 2025-11-24 07:01:39.503 [INFO][3769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-b-419a632674' Nov 24 07:01:39.618218 containerd[1499]: 2025-11-24 07:01:39.519 [INFO][3769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:39.618218 containerd[1499]: 2025-11-24 07:01:39.536 [INFO][3769] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-b-419a632674" Nov 24 07:01:39.618218 containerd[1499]: 2025-11-24 07:01:39.543 [INFO][3769] ipam/ipam.go 511: Trying affinity for 192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:39.618218 containerd[1499]: 2025-11-24 07:01:39.547 [INFO][3769] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:39.618218 containerd[1499]: 2025-11-24 07:01:39.550 [INFO][3769] ipam/ipam.go 235: Affinity is confirmed and block 
has been loaded cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:39.618619 containerd[1499]: 2025-11-24 07:01:39.551 [INFO][3769] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:39.618619 containerd[1499]: 2025-11-24 07:01:39.553 [INFO][3769] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a Nov 24 07:01:39.618619 containerd[1499]: 2025-11-24 07:01:39.558 [INFO][3769] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:39.618619 containerd[1499]: 2025-11-24 07:01:39.574 [INFO][3769] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.193/26] block=192.168.18.192/26 handle="k8s-pod-network.329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:39.618619 containerd[1499]: 2025-11-24 07:01:39.574 [INFO][3769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.193/26] handle="k8s-pod-network.329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:39.618619 containerd[1499]: 2025-11-24 07:01:39.574 [INFO][3769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 07:01:39.618619 containerd[1499]: 2025-11-24 07:01:39.574 [INFO][3769] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.193/26] IPv6=[] ContainerID="329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" HandleID="k8s-pod-network.329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" Workload="ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0" Nov 24 07:01:39.618785 containerd[1499]: 2025-11-24 07:01:39.578 [INFO][3757] cni-plugin/k8s.go 418: Populated endpoint ContainerID="329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" Namespace="calico-system" Pod="whisker-865bc4d9cd-wsm2s" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0", GenerateName:"whisker-865bc4d9cd-", Namespace:"calico-system", SelfLink:"", UID:"45848e32-6c02-41b9-837d-21663011857a", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"865bc4d9cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"", Pod:"whisker-865bc4d9cd-wsm2s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calib91c67a3f0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:39.618785 containerd[1499]: 2025-11-24 07:01:39.578 [INFO][3757] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.193/32] ContainerID="329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" Namespace="calico-system" Pod="whisker-865bc4d9cd-wsm2s" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0" Nov 24 07:01:39.618874 containerd[1499]: 2025-11-24 07:01:39.578 [INFO][3757] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib91c67a3f0d ContainerID="329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" Namespace="calico-system" Pod="whisker-865bc4d9cd-wsm2s" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0" Nov 24 07:01:39.618874 containerd[1499]: 2025-11-24 07:01:39.595 [INFO][3757] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" Namespace="calico-system" Pod="whisker-865bc4d9cd-wsm2s" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0" Nov 24 07:01:39.619295 containerd[1499]: 2025-11-24 07:01:39.595 [INFO][3757] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" Namespace="calico-system" Pod="whisker-865bc4d9cd-wsm2s" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0", GenerateName:"whisker-865bc4d9cd-", Namespace:"calico-system", SelfLink:"", 
UID:"45848e32-6c02-41b9-837d-21663011857a", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"865bc4d9cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a", Pod:"whisker-865bc4d9cd-wsm2s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib91c67a3f0d", MAC:"72:1a:b0:9a:c0:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:39.619480 containerd[1499]: 2025-11-24 07:01:39.608 [INFO][3757] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" Namespace="calico-system" Pod="whisker-865bc4d9cd-wsm2s" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-whisker--865bc4d9cd--wsm2s-eth0" Nov 24 07:01:39.760207 containerd[1499]: time="2025-11-24T07:01:39.760122664Z" level=info msg="connecting to shim 329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a" address="unix:///run/containerd/s/657b50a2b7eae9de84186d8149e4095dd74526d1f88301e8808c1760af5d9628" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:01:39.787214 systemd[1]: Started 
cri-containerd-329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a.scope - libcontainer container 329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a. Nov 24 07:01:39.800832 kubelet[2674]: I1124 07:01:39.800784 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 07:01:39.801627 kubelet[2674]: E1124 07:01:39.801595 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:39.880796 containerd[1499]: time="2025-11-24T07:01:39.880742124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-865bc4d9cd-wsm2s,Uid:45848e32-6c02-41b9-837d-21663011857a,Namespace:calico-system,Attempt:0,} returns sandbox id \"329fb99fd4d234205b7ce2c54f0a150741762e48e9a0930316afadad8a2fd83a\"" Nov 24 07:01:39.882919 containerd[1499]: time="2025-11-24T07:01:39.882624937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 07:01:40.226815 containerd[1499]: time="2025-11-24T07:01:40.225862491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:40.227577 containerd[1499]: time="2025-11-24T07:01:40.227522529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 07:01:40.227730 containerd[1499]: time="2025-11-24T07:01:40.227702252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 07:01:40.228364 kubelet[2674]: E1124 07:01:40.228049 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 07:01:40.228364 kubelet[2674]: E1124 07:01:40.228109 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 07:01:40.229618 kubelet[2674]: E1124 07:01:40.229530 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4b7086ed6818482a82815cc41f7813f5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7c6h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:fa
lse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-865bc4d9cd-wsm2s_calico-system(45848e32-6c02-41b9-837d-21663011857a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:40.232993 containerd[1499]: time="2025-11-24T07:01:40.232956935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 07:01:40.563648 containerd[1499]: time="2025-11-24T07:01:40.563576751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:40.565472 containerd[1499]: time="2025-11-24T07:01:40.565408141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 07:01:40.565669 containerd[1499]: time="2025-11-24T07:01:40.565528906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 07:01:40.565823 kubelet[2674]: E1124 07:01:40.565768 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 07:01:40.565993 kubelet[2674]: E1124 07:01:40.565833 2674 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 07:01:40.566058 kubelet[2674]: E1124 07:01:40.565999 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7c6h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,Sec
compProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-865bc4d9cd-wsm2s_calico-system(45848e32-6c02-41b9-837d-21663011857a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:40.567513 kubelet[2674]: E1124 07:01:40.567447 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-865bc4d9cd-wsm2s" podUID="45848e32-6c02-41b9-837d-21663011857a" Nov 24 07:01:40.811376 kubelet[2674]: E1124 07:01:40.811081 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-865bc4d9cd-wsm2s" podUID="45848e32-6c02-41b9-837d-21663011857a" Nov 24 07:01:40.985124 systemd-networkd[1423]: calib91c67a3f0d: Gained IPv6LL Nov 24 07:01:41.813231 kubelet[2674]: E1124 07:01:41.813172 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-865bc4d9cd-wsm2s" podUID="45848e32-6c02-41b9-837d-21663011857a" Nov 24 07:01:42.872451 kubelet[2674]: I1124 07:01:42.872373 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 07:01:42.873480 kubelet[2674]: E1124 07:01:42.873247 2674 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:43.530221 containerd[1499]: time="2025-11-24T07:01:43.529567601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq9xq,Uid:c34e73b2-4364-4998-a5a0-398cc36c9e15,Namespace:calico-system,Attempt:0,}" Nov 24 07:01:43.530221 containerd[1499]: time="2025-11-24T07:01:43.529626881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9b89cb9c-ljkq4,Uid:e3e5450f-313c-477b-87ac-4da097ca2eb2,Namespace:calico-system,Attempt:0,}" Nov 24 07:01:43.530221 containerd[1499]: time="2025-11-24T07:01:43.529571287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b7cb7b4d-lbdp7,Uid:8bd2d503-fa71-45be-9f52-ae92f15b3067,Namespace:calico-apiserver,Attempt:0,}" Nov 24 07:01:43.791809 systemd-networkd[1423]: cali07e00293cf2: Link UP Nov 24 07:01:43.796430 systemd-networkd[1423]: cali07e00293cf2: Gained carrier Nov 24 07:01:43.824791 containerd[1499]: 2025-11-24 07:01:43.589 [INFO][4027] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 07:01:43.824791 containerd[1499]: 2025-11-24 07:01:43.639 [INFO][4027] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0 calico-apiserver-66b7cb7b4d- calico-apiserver 8bd2d503-fa71-45be-9f52-ae92f15b3067 870 0 2025-11-24 07:01:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66b7cb7b4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-b-419a632674 calico-apiserver-66b7cb7b4d-lbdp7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07e00293cf2 
[] [] }} ContainerID="339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-lbdp7" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-" Nov 24 07:01:43.824791 containerd[1499]: 2025-11-24 07:01:43.639 [INFO][4027] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-lbdp7" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0" Nov 24 07:01:43.824791 containerd[1499]: 2025-11-24 07:01:43.716 [INFO][4055] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" HandleID="k8s-pod-network.339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" Workload="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0" Nov 24 07:01:43.825120 containerd[1499]: 2025-11-24 07:01:43.716 [INFO][4055] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" HandleID="k8s-pod-network.339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" Workload="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-b-419a632674", "pod":"calico-apiserver-66b7cb7b4d-lbdp7", "timestamp":"2025-11-24 07:01:43.716160095 +0000 UTC"}, Hostname:"ci-4459.2.1-b-419a632674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 07:01:43.825120 containerd[1499]: 2025-11-24 07:01:43.716 [INFO][4055] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 07:01:43.825120 containerd[1499]: 2025-11-24 07:01:43.716 [INFO][4055] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 07:01:43.825120 containerd[1499]: 2025-11-24 07:01:43.717 [INFO][4055] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-b-419a632674' Nov 24 07:01:43.825120 containerd[1499]: 2025-11-24 07:01:43.728 [INFO][4055] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:43.825120 containerd[1499]: 2025-11-24 07:01:43.735 [INFO][4055] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-b-419a632674" Nov 24 07:01:43.825120 containerd[1499]: 2025-11-24 07:01:43.747 [INFO][4055] ipam/ipam.go 511: Trying affinity for 192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:43.825120 containerd[1499]: 2025-11-24 07:01:43.754 [INFO][4055] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:43.825120 containerd[1499]: 2025-11-24 07:01:43.758 [INFO][4055] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:43.825395 containerd[1499]: 2025-11-24 07:01:43.759 [INFO][4055] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:43.825395 containerd[1499]: 2025-11-24 07:01:43.762 [INFO][4055] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332 Nov 24 07:01:43.825395 containerd[1499]: 2025-11-24 07:01:43.769 [INFO][4055] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.192/26 
handle="k8s-pod-network.339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:43.825395 containerd[1499]: 2025-11-24 07:01:43.778 [INFO][4055] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.194/26] block=192.168.18.192/26 handle="k8s-pod-network.339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:43.825395 containerd[1499]: 2025-11-24 07:01:43.778 [INFO][4055] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.194/26] handle="k8s-pod-network.339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:43.825395 containerd[1499]: 2025-11-24 07:01:43.778 [INFO][4055] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 07:01:43.825395 containerd[1499]: 2025-11-24 07:01:43.778 [INFO][4055] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.194/26] IPv6=[] ContainerID="339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" HandleID="k8s-pod-network.339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" Workload="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0" Nov 24 07:01:43.826494 containerd[1499]: 2025-11-24 07:01:43.783 [INFO][4027] cni-plugin/k8s.go 418: Populated endpoint ContainerID="339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-lbdp7" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0", GenerateName:"calico-apiserver-66b7cb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bd2d503-fa71-45be-9f52-ae92f15b3067", ResourceVersion:"870", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b7cb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"", Pod:"calico-apiserver-66b7cb7b4d-lbdp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07e00293cf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:43.826881 containerd[1499]: 2025-11-24 07:01:43.784 [INFO][4027] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.194/32] ContainerID="339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-lbdp7" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0" Nov 24 07:01:43.826881 containerd[1499]: 2025-11-24 07:01:43.784 [INFO][4027] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07e00293cf2 ContainerID="339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-lbdp7" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0" Nov 24 07:01:43.826881 containerd[1499]: 2025-11-24 07:01:43.797 
[INFO][4027] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-lbdp7" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0" Nov 24 07:01:43.827431 containerd[1499]: 2025-11-24 07:01:43.799 [INFO][4027] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-lbdp7" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0", GenerateName:"calico-apiserver-66b7cb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bd2d503-fa71-45be-9f52-ae92f15b3067", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b7cb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332", Pod:"calico-apiserver-66b7cb7b4d-lbdp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07e00293cf2", MAC:"26:b8:78:35:7f:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:43.827615 containerd[1499]: 2025-11-24 07:01:43.817 [INFO][4027] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-lbdp7" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--lbdp7-eth0" Nov 24 07:01:43.867645 containerd[1499]: time="2025-11-24T07:01:43.867563464Z" level=info msg="connecting to shim 339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332" address="unix:///run/containerd/s/1e0df7d5edd2a12143daf6af498dde9cd754ffc01f904aeb970db8cf1caf8517" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:01:43.956544 systemd-networkd[1423]: calic19fbd69c70: Link UP Nov 24 07:01:43.956828 systemd-networkd[1423]: calic19fbd69c70: Gained carrier Nov 24 07:01:43.998490 systemd[1]: Started cri-containerd-339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332.scope - libcontainer container 339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332. 
Nov 24 07:01:44.007854 containerd[1499]: 2025-11-24 07:01:43.608 [INFO][4013] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 07:01:44.007854 containerd[1499]: 2025-11-24 07:01:43.638 [INFO][4013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0 calico-kube-controllers-7f9b89cb9c- calico-system e3e5450f-313c-477b-87ac-4da097ca2eb2 871 0 2025-11-24 07:01:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f9b89cb9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.1-b-419a632674 calico-kube-controllers-7f9b89cb9c-ljkq4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic19fbd69c70 [] [] }} ContainerID="174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" Namespace="calico-system" Pod="calico-kube-controllers-7f9b89cb9c-ljkq4" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-" Nov 24 07:01:44.007854 containerd[1499]: 2025-11-24 07:01:43.640 [INFO][4013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" Namespace="calico-system" Pod="calico-kube-controllers-7f9b89cb9c-ljkq4" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0" Nov 24 07:01:44.007854 containerd[1499]: 2025-11-24 07:01:43.724 [INFO][4054] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" HandleID="k8s-pod-network.174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" 
Workload="ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0" Nov 24 07:01:44.008480 containerd[1499]: 2025-11-24 07:01:43.726 [INFO][4054] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" HandleID="k8s-pod-network.174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" Workload="ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-b-419a632674", "pod":"calico-kube-controllers-7f9b89cb9c-ljkq4", "timestamp":"2025-11-24 07:01:43.724397425 +0000 UTC"}, Hostname:"ci-4459.2.1-b-419a632674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 07:01:44.008480 containerd[1499]: 2025-11-24 07:01:43.726 [INFO][4054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 07:01:44.008480 containerd[1499]: 2025-11-24 07:01:43.778 [INFO][4054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 07:01:44.008480 containerd[1499]: 2025-11-24 07:01:43.778 [INFO][4054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-b-419a632674' Nov 24 07:01:44.008480 containerd[1499]: 2025-11-24 07:01:43.830 [INFO][4054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.008480 containerd[1499]: 2025-11-24 07:01:43.845 [INFO][4054] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.008480 containerd[1499]: 2025-11-24 07:01:43.862 [INFO][4054] ipam/ipam.go 511: Trying affinity for 192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.008480 containerd[1499]: 2025-11-24 07:01:43.871 [INFO][4054] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.008480 containerd[1499]: 2025-11-24 07:01:43.881 [INFO][4054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.008772 containerd[1499]: 2025-11-24 07:01:43.882 [INFO][4054] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.008772 containerd[1499]: 2025-11-24 07:01:43.888 [INFO][4054] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4 Nov 24 07:01:44.008772 containerd[1499]: 2025-11-24 07:01:43.901 [INFO][4054] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.008772 containerd[1499]: 2025-11-24 07:01:43.916 [INFO][4054] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.18.195/26] block=192.168.18.192/26 handle="k8s-pod-network.174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.008772 containerd[1499]: 2025-11-24 07:01:43.918 [INFO][4054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.195/26] handle="k8s-pod-network.174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.008772 containerd[1499]: 2025-11-24 07:01:43.923 [INFO][4054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 07:01:44.008772 containerd[1499]: 2025-11-24 07:01:43.923 [INFO][4054] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.195/26] IPv6=[] ContainerID="174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" HandleID="k8s-pod-network.174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" Workload="ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0" Nov 24 07:01:44.011267 containerd[1499]: 2025-11-24 07:01:43.946 [INFO][4013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" Namespace="calico-system" Pod="calico-kube-controllers-7f9b89cb9c-ljkq4" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0", GenerateName:"calico-kube-controllers-7f9b89cb9c-", Namespace:"calico-system", SelfLink:"", UID:"e3e5450f-313c-477b-87ac-4da097ca2eb2", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f9b89cb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"", Pod:"calico-kube-controllers-7f9b89cb9c-ljkq4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic19fbd69c70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:44.011848 containerd[1499]: 2025-11-24 07:01:43.946 [INFO][4013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.195/32] ContainerID="174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" Namespace="calico-system" Pod="calico-kube-controllers-7f9b89cb9c-ljkq4" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0" Nov 24 07:01:44.011848 containerd[1499]: 2025-11-24 07:01:43.946 [INFO][4013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic19fbd69c70 ContainerID="174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" Namespace="calico-system" Pod="calico-kube-controllers-7f9b89cb9c-ljkq4" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0" Nov 24 07:01:44.011848 containerd[1499]: 2025-11-24 07:01:43.953 [INFO][4013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" Namespace="calico-system" Pod="calico-kube-controllers-7f9b89cb9c-ljkq4" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0" Nov 24 07:01:44.012583 containerd[1499]: 2025-11-24 07:01:43.953 [INFO][4013] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" Namespace="calico-system" Pod="calico-kube-controllers-7f9b89cb9c-ljkq4" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0", GenerateName:"calico-kube-controllers-7f9b89cb9c-", Namespace:"calico-system", SelfLink:"", UID:"e3e5450f-313c-477b-87ac-4da097ca2eb2", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f9b89cb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4", Pod:"calico-kube-controllers-7f9b89cb9c-ljkq4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic19fbd69c70", MAC:"e6:e6:71:3a:4e:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:44.012696 containerd[1499]: 2025-11-24 07:01:43.977 [INFO][4013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" Namespace="calico-system" Pod="calico-kube-controllers-7f9b89cb9c-ljkq4" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--kube--controllers--7f9b89cb9c--ljkq4-eth0" Nov 24 07:01:44.087265 containerd[1499]: time="2025-11-24T07:01:44.085980233Z" level=info msg="connecting to shim 174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4" address="unix:///run/containerd/s/a4ddd4758987f45df855866b840e51ed092ebf42b2fe0d34ee9f618f861b691a" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:01:44.132942 systemd-networkd[1423]: calic703eb29666: Link UP Nov 24 07:01:44.136376 systemd-networkd[1423]: calic703eb29666: Gained carrier Nov 24 07:01:44.172425 containerd[1499]: 2025-11-24 07:01:43.625 [INFO][4022] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 07:01:44.172425 containerd[1499]: 2025-11-24 07:01:43.664 [INFO][4022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0 csi-node-driver- calico-system c34e73b2-4364-4998-a5a0-398cc36c9e15 756 0 2025-11-24 07:01:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.1-b-419a632674 
csi-node-driver-xq9xq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic703eb29666 [] [] }} ContainerID="f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" Namespace="calico-system" Pod="csi-node-driver-xq9xq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-" Nov 24 07:01:44.172425 containerd[1499]: 2025-11-24 07:01:43.665 [INFO][4022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" Namespace="calico-system" Pod="csi-node-driver-xq9xq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0" Nov 24 07:01:44.172425 containerd[1499]: 2025-11-24 07:01:43.736 [INFO][4064] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" HandleID="k8s-pod-network.f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" Workload="ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0" Nov 24 07:01:44.173156 containerd[1499]: 2025-11-24 07:01:43.736 [INFO][4064] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" HandleID="k8s-pod-network.f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" Workload="ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd620), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-b-419a632674", "pod":"csi-node-driver-xq9xq", "timestamp":"2025-11-24 07:01:43.736393246 +0000 UTC"}, Hostname:"ci-4459.2.1-b-419a632674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 07:01:44.173156 containerd[1499]: 2025-11-24 
07:01:43.736 [INFO][4064] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 07:01:44.173156 containerd[1499]: 2025-11-24 07:01:43.927 [INFO][4064] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 07:01:44.173156 containerd[1499]: 2025-11-24 07:01:43.928 [INFO][4064] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-b-419a632674' Nov 24 07:01:44.173156 containerd[1499]: 2025-11-24 07:01:43.965 [INFO][4064] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.173156 containerd[1499]: 2025-11-24 07:01:44.008 [INFO][4064] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.173156 containerd[1499]: 2025-11-24 07:01:44.023 [INFO][4064] ipam/ipam.go 511: Trying affinity for 192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.173156 containerd[1499]: 2025-11-24 07:01:44.029 [INFO][4064] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.173156 containerd[1499]: 2025-11-24 07:01:44.037 [INFO][4064] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.173561 containerd[1499]: 2025-11-24 07:01:44.037 [INFO][4064] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.173561 containerd[1499]: 2025-11-24 07:01:44.046 [INFO][4064] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3 Nov 24 07:01:44.173561 containerd[1499]: 2025-11-24 07:01:44.064 [INFO][4064] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.192/26 
handle="k8s-pod-network.f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.173561 containerd[1499]: 2025-11-24 07:01:44.084 [INFO][4064] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.196/26] block=192.168.18.192/26 handle="k8s-pod-network.f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.173561 containerd[1499]: 2025-11-24 07:01:44.084 [INFO][4064] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.196/26] handle="k8s-pod-network.f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:44.173561 containerd[1499]: 2025-11-24 07:01:44.084 [INFO][4064] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 07:01:44.173561 containerd[1499]: 2025-11-24 07:01:44.084 [INFO][4064] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.196/26] IPv6=[] ContainerID="f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" HandleID="k8s-pod-network.f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" Workload="ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0" Nov 24 07:01:44.173950 containerd[1499]: 2025-11-24 07:01:44.100 [INFO][4022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" Namespace="calico-system" Pod="csi-node-driver-xq9xq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c34e73b2-4364-4998-a5a0-398cc36c9e15", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 21, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"", Pod:"csi-node-driver-xq9xq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic703eb29666", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:44.174044 containerd[1499]: 2025-11-24 07:01:44.100 [INFO][4022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.196/32] ContainerID="f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" Namespace="calico-system" Pod="csi-node-driver-xq9xq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0" Nov 24 07:01:44.174044 containerd[1499]: 2025-11-24 07:01:44.100 [INFO][4022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic703eb29666 ContainerID="f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" Namespace="calico-system" Pod="csi-node-driver-xq9xq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0" Nov 24 07:01:44.174044 containerd[1499]: 2025-11-24 07:01:44.134 [INFO][4022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" Namespace="calico-system" Pod="csi-node-driver-xq9xq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0" Nov 24 07:01:44.174450 containerd[1499]: 2025-11-24 07:01:44.134 [INFO][4022] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" Namespace="calico-system" Pod="csi-node-driver-xq9xq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c34e73b2-4364-4998-a5a0-398cc36c9e15", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3", Pod:"csi-node-driver-xq9xq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calic703eb29666", MAC:"3e:7a:98:86:81:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:44.174529 containerd[1499]: 2025-11-24 07:01:44.159 [INFO][4022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" Namespace="calico-system" Pod="csi-node-driver-xq9xq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-csi--node--driver--xq9xq-eth0" Nov 24 07:01:44.225190 systemd[1]: Started cri-containerd-174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4.scope - libcontainer container 174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4. Nov 24 07:01:44.238047 containerd[1499]: time="2025-11-24T07:01:44.237979426Z" level=info msg="connecting to shim f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3" address="unix:///run/containerd/s/3b923e9bfe266e8448ab6ccb431dff5148d91c639621690dabd178117aabfab9" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:01:44.310032 systemd[1]: Started cri-containerd-f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3.scope - libcontainer container f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3. 
Nov 24 07:01:44.436839 containerd[1499]: time="2025-11-24T07:01:44.436782432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9b89cb9c-ljkq4,Uid:e3e5450f-313c-477b-87ac-4da097ca2eb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"174dfe15ea7ae748c8c4175bba454ffe0a4ba0cca451632c6526d192e796fce4\"" Nov 24 07:01:44.441566 containerd[1499]: time="2025-11-24T07:01:44.441446685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b7cb7b4d-lbdp7,Uid:8bd2d503-fa71-45be-9f52-ae92f15b3067,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"339454d5bf317d18b42f204fd1556d3a5197e84bde4ca0901d7ab3d689aad332\"" Nov 24 07:01:44.444019 containerd[1499]: time="2025-11-24T07:01:44.443972695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq9xq,Uid:c34e73b2-4364-4998-a5a0-398cc36c9e15,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9000820e55db7783d884b2c69bc9bd6eca5a8805313981c8d5231f7d1e976b3\"" Nov 24 07:01:44.445817 containerd[1499]: time="2025-11-24T07:01:44.445568361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 07:01:44.774962 containerd[1499]: time="2025-11-24T07:01:44.774702588Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:44.775768 containerd[1499]: time="2025-11-24T07:01:44.775518199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 07:01:44.775768 containerd[1499]: time="2025-11-24T07:01:44.775616407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 07:01:44.776080 kubelet[2674]: E1124 
07:01:44.776019 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:01:44.777174 kubelet[2674]: E1124 07:01:44.776083 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:01:44.777174 kubelet[2674]: E1124 07:01:44.776336 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropaga
tion:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z984,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f9b89cb9c-ljkq4_calico-system(e3e5450f-313c-477b-87ac-4da097ca2eb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:44.777852 containerd[1499]: time="2025-11-24T07:01:44.776726347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:01:44.777942 
kubelet[2674]: E1124 07:01:44.777488 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" podUID="e3e5450f-313c-477b-87ac-4da097ca2eb2" Nov 24 07:01:44.828200 kubelet[2674]: E1124 07:01:44.828031 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" podUID="e3e5450f-313c-477b-87ac-4da097ca2eb2" Nov 24 07:01:45.112525 containerd[1499]: time="2025-11-24T07:01:45.112376160Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:45.113372 containerd[1499]: time="2025-11-24T07:01:45.113295418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:01:45.113545 containerd[1499]: time="2025-11-24T07:01:45.113515559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 
07:01:45.113910 kubelet[2674]: E1124 07:01:45.113846 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:01:45.114191 kubelet[2674]: E1124 07:01:45.114027 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:01:45.114785 containerd[1499]: time="2025-11-24T07:01:45.114438991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 07:01:45.114857 kubelet[2674]: E1124 07:01:45.114356 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwn5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b7cb7b4d-lbdp7_calico-apiserver(8bd2d503-fa71-45be-9f52-ae92f15b3067): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:45.116364 kubelet[2674]: E1124 07:01:45.116171 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" podUID="8bd2d503-fa71-45be-9f52-ae92f15b3067" Nov 24 07:01:45.209265 systemd-networkd[1423]: calic19fbd69c70: Gained IPv6LL Nov 24 07:01:45.465135 systemd-networkd[1423]: cali07e00293cf2: Gained IPv6LL Nov 24 07:01:45.470601 containerd[1499]: time="2025-11-24T07:01:45.470421120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:45.471433 containerd[1499]: time="2025-11-24T07:01:45.471370754Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 07:01:45.471670 containerd[1499]: time="2025-11-24T07:01:45.471575710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 07:01:45.472069 kubelet[2674]: E1124 07:01:45.472017 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:01:45.472249 kubelet[2674]: E1124 07:01:45.472082 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:01:45.473076 kubelet[2674]: E1124 07:01:45.473012 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfsxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xq9xq_calico-system(c34e73b2-4364-4998-a5a0-398cc36c9e15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:45.476499 containerd[1499]: time="2025-11-24T07:01:45.476443261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 07:01:45.530431 kubelet[2674]: E1124 07:01:45.530388 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:45.532067 containerd[1499]: time="2025-11-24T07:01:45.532005020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fn526,Uid:8cbd5edf-7893-48d4-8ee9-18409fdb58f5,Namespace:kube-system,Attempt:0,}" Nov 24 07:01:45.535837 containerd[1499]: time="2025-11-24T07:01:45.535200882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b7cb7b4d-2jnsq,Uid:e713eb78-8f4a-4fad-881a-0e37cd3c7e10,Namespace:calico-apiserver,Attempt:0,}" Nov 24 07:01:45.596008 systemd-networkd[1423]: calic703eb29666: Gained IPv6LL Nov 24 07:01:45.791389 containerd[1499]: time="2025-11-24T07:01:45.791322647Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:45.793999 containerd[1499]: time="2025-11-24T07:01:45.793304444Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 07:01:45.794221 containerd[1499]: time="2025-11-24T07:01:45.793387052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 07:01:45.794820 kubelet[2674]: E1124 07:01:45.794465 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:01:45.794820 kubelet[2674]: E1124 07:01:45.794516 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:01:45.794820 kubelet[2674]: E1124 07:01:45.794658 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfsxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xq9xq_calico-system(c34e73b2-4364-4998-a5a0-398cc36c9e15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:45.796653 kubelet[2674]: E1124 07:01:45.795935 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15" Nov 24 07:01:45.833098 systemd-networkd[1423]: cali35ea239e260: Link UP Nov 24 07:01:45.837469 systemd-networkd[1423]: cali35ea239e260: Gained carrier Nov 24 07:01:45.856466 kubelet[2674]: E1124 07:01:45.856353 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" podUID="8bd2d503-fa71-45be-9f52-ae92f15b3067" Nov 24 07:01:45.859418 kubelet[2674]: E1124 07:01:45.859280 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15" Nov 24 07:01:45.861524 kubelet[2674]: E1124 07:01:45.861205 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" podUID="e3e5450f-313c-477b-87ac-4da097ca2eb2" Nov 24 07:01:45.889190 containerd[1499]: 2025-11-24 07:01:45.618 [INFO][4266] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 07:01:45.889190 containerd[1499]: 2025-11-24 07:01:45.643 [INFO][4266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0 calico-apiserver-66b7cb7b4d- 
calico-apiserver e713eb78-8f4a-4fad-881a-0e37cd3c7e10 872 0 2025-11-24 07:01:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66b7cb7b4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-b-419a632674 calico-apiserver-66b7cb7b4d-2jnsq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali35ea239e260 [] [] }} ContainerID="898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-2jnsq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-" Nov 24 07:01:45.889190 containerd[1499]: 2025-11-24 07:01:45.643 [INFO][4266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-2jnsq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0" Nov 24 07:01:45.889190 containerd[1499]: 2025-11-24 07:01:45.732 [INFO][4292] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" HandleID="k8s-pod-network.898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" Workload="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0" Nov 24 07:01:45.889538 containerd[1499]: 2025-11-24 07:01:45.733 [INFO][4292] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" HandleID="k8s-pod-network.898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" Workload="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0003918a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-b-419a632674", "pod":"calico-apiserver-66b7cb7b4d-2jnsq", "timestamp":"2025-11-24 07:01:45.732473086 +0000 UTC"}, Hostname:"ci-4459.2.1-b-419a632674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 07:01:45.889538 containerd[1499]: 2025-11-24 07:01:45.733 [INFO][4292] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 07:01:45.889538 containerd[1499]: 2025-11-24 07:01:45.733 [INFO][4292] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 07:01:45.889538 containerd[1499]: 2025-11-24 07:01:45.733 [INFO][4292] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-b-419a632674' Nov 24 07:01:45.889538 containerd[1499]: 2025-11-24 07:01:45.745 [INFO][4292] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:45.889538 containerd[1499]: 2025-11-24 07:01:45.756 [INFO][4292] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-b-419a632674" Nov 24 07:01:45.889538 containerd[1499]: 2025-11-24 07:01:45.771 [INFO][4292] ipam/ipam.go 511: Trying affinity for 192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:45.889538 containerd[1499]: 2025-11-24 07:01:45.776 [INFO][4292] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:45.889538 containerd[1499]: 2025-11-24 07:01:45.783 [INFO][4292] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:45.889815 containerd[1499]: 2025-11-24 07:01:45.784 [INFO][4292] ipam/ipam.go 1219: Attempting to assign 1 addresses 
from block block=192.168.18.192/26 handle="k8s-pod-network.898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:45.889815 containerd[1499]: 2025-11-24 07:01:45.787 [INFO][4292] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a Nov 24 07:01:45.889815 containerd[1499]: 2025-11-24 07:01:45.797 [INFO][4292] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:45.889815 containerd[1499]: 2025-11-24 07:01:45.809 [INFO][4292] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.197/26] block=192.168.18.192/26 handle="k8s-pod-network.898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:45.889815 containerd[1499]: 2025-11-24 07:01:45.809 [INFO][4292] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.197/26] handle="k8s-pod-network.898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:45.889815 containerd[1499]: 2025-11-24 07:01:45.809 [INFO][4292] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 07:01:45.889815 containerd[1499]: 2025-11-24 07:01:45.809 [INFO][4292] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.197/26] IPv6=[] ContainerID="898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" HandleID="k8s-pod-network.898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" Workload="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0" Nov 24 07:01:45.891907 containerd[1499]: 2025-11-24 07:01:45.816 [INFO][4266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-2jnsq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0", GenerateName:"calico-apiserver-66b7cb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e713eb78-8f4a-4fad-881a-0e37cd3c7e10", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b7cb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"", Pod:"calico-apiserver-66b7cb7b4d-2jnsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.18.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35ea239e260", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:45.892342 containerd[1499]: 2025-11-24 07:01:45.817 [INFO][4266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.197/32] ContainerID="898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-2jnsq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0" Nov 24 07:01:45.892342 containerd[1499]: 2025-11-24 07:01:45.818 [INFO][4266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35ea239e260 ContainerID="898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-2jnsq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0" Nov 24 07:01:45.892342 containerd[1499]: 2025-11-24 07:01:45.841 [INFO][4266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-2jnsq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0" Nov 24 07:01:45.892577 containerd[1499]: 2025-11-24 07:01:45.850 [INFO][4266] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-2jnsq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0", GenerateName:"calico-apiserver-66b7cb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e713eb78-8f4a-4fad-881a-0e37cd3c7e10", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b7cb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a", Pod:"calico-apiserver-66b7cb7b4d-2jnsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35ea239e260", MAC:"86:8e:64:37:46:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:45.892676 containerd[1499]: 2025-11-24 07:01:45.881 [INFO][4266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" Namespace="calico-apiserver" Pod="calico-apiserver-66b7cb7b4d-2jnsq" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-calico--apiserver--66b7cb7b4d--2jnsq-eth0" Nov 24 07:01:45.945084 containerd[1499]: time="2025-11-24T07:01:45.944719974Z" level=info 
msg="connecting to shim 898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a" address="unix:///run/containerd/s/bfcc9b247f826b7b4b4c301330aaa6206c5bb2fb189557c4a4a90011feccd232" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:01:46.013797 systemd[1]: Started cri-containerd-898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a.scope - libcontainer container 898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a. Nov 24 07:01:46.055248 systemd-networkd[1423]: cali0e183e6bec3: Link UP Nov 24 07:01:46.061198 systemd-networkd[1423]: cali0e183e6bec3: Gained carrier Nov 24 07:01:46.101693 containerd[1499]: 2025-11-24 07:01:45.636 [INFO][4268] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 07:01:46.101693 containerd[1499]: 2025-11-24 07:01:45.663 [INFO][4268] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0 coredns-668d6bf9bc- kube-system 8cbd5edf-7893-48d4-8ee9-18409fdb58f5 873 0 2025-11-24 07:01:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-b-419a632674 coredns-668d6bf9bc-fn526 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0e183e6bec3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" Namespace="kube-system" Pod="coredns-668d6bf9bc-fn526" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-" Nov 24 07:01:46.101693 containerd[1499]: 2025-11-24 07:01:45.663 [INFO][4268] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" Namespace="kube-system" Pod="coredns-668d6bf9bc-fn526" 
WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0" Nov 24 07:01:46.101693 containerd[1499]: 2025-11-24 07:01:45.746 [INFO][4297] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" HandleID="k8s-pod-network.c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" Workload="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0" Nov 24 07:01:46.103174 containerd[1499]: 2025-11-24 07:01:45.746 [INFO][4297] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" HandleID="k8s-pod-network.c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" Workload="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103a60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-b-419a632674", "pod":"coredns-668d6bf9bc-fn526", "timestamp":"2025-11-24 07:01:45.746035381 +0000 UTC"}, Hostname:"ci-4459.2.1-b-419a632674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 07:01:46.103174 containerd[1499]: 2025-11-24 07:01:45.747 [INFO][4297] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 07:01:46.103174 containerd[1499]: 2025-11-24 07:01:45.809 [INFO][4297] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 07:01:46.103174 containerd[1499]: 2025-11-24 07:01:45.809 [INFO][4297] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-b-419a632674' Nov 24 07:01:46.103174 containerd[1499]: 2025-11-24 07:01:45.852 [INFO][4297] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.103174 containerd[1499]: 2025-11-24 07:01:45.956 [INFO][4297] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.103174 containerd[1499]: 2025-11-24 07:01:45.983 [INFO][4297] ipam/ipam.go 511: Trying affinity for 192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.103174 containerd[1499]: 2025-11-24 07:01:45.990 [INFO][4297] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.103174 containerd[1499]: 2025-11-24 07:01:46.002 [INFO][4297] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.103604 containerd[1499]: 2025-11-24 07:01:46.002 [INFO][4297] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.103604 containerd[1499]: 2025-11-24 07:01:46.016 [INFO][4297] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53 Nov 24 07:01:46.103604 containerd[1499]: 2025-11-24 07:01:46.026 [INFO][4297] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.103604 containerd[1499]: 2025-11-24 07:01:46.039 [INFO][4297] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.18.198/26] block=192.168.18.192/26 handle="k8s-pod-network.c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.103604 containerd[1499]: 2025-11-24 07:01:46.039 [INFO][4297] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.198/26] handle="k8s-pod-network.c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.103604 containerd[1499]: 2025-11-24 07:01:46.039 [INFO][4297] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 07:01:46.103604 containerd[1499]: 2025-11-24 07:01:46.039 [INFO][4297] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.198/26] IPv6=[] ContainerID="c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" HandleID="k8s-pod-network.c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" Workload="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0" Nov 24 07:01:46.105120 containerd[1499]: 2025-11-24 07:01:46.046 [INFO][4268] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" Namespace="kube-system" Pod="coredns-668d6bf9bc-fn526" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cbd5edf-7893-48d4-8ee9-18409fdb58f5", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"", Pod:"coredns-668d6bf9bc-fn526", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e183e6bec3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:46.105120 containerd[1499]: 2025-11-24 07:01:46.047 [INFO][4268] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.198/32] ContainerID="c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" Namespace="kube-system" Pod="coredns-668d6bf9bc-fn526" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0" Nov 24 07:01:46.105120 containerd[1499]: 2025-11-24 07:01:46.047 [INFO][4268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e183e6bec3 ContainerID="c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" Namespace="kube-system" Pod="coredns-668d6bf9bc-fn526" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0" Nov 24 07:01:46.105120 containerd[1499]: 2025-11-24 07:01:46.065 [INFO][4268] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" Namespace="kube-system" Pod="coredns-668d6bf9bc-fn526" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0" Nov 24 07:01:46.105120 containerd[1499]: 2025-11-24 07:01:46.069 [INFO][4268] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" Namespace="kube-system" Pod="coredns-668d6bf9bc-fn526" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cbd5edf-7893-48d4-8ee9-18409fdb58f5", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53", Pod:"coredns-668d6bf9bc-fn526", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e183e6bec3", MAC:"02:3b:06:14:2f:34", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:46.105120 containerd[1499]: 2025-11-24 07:01:46.092 [INFO][4268] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" Namespace="kube-system" Pod="coredns-668d6bf9bc-fn526" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--fn526-eth0" Nov 24 07:01:46.150473 containerd[1499]: time="2025-11-24T07:01:46.150014810Z" level=info msg="connecting to shim c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53" address="unix:///run/containerd/s/5cdef30e91b2526aa3283bc5bf88efaec3162d60b99c55875f0da824a96794d7" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:01:46.198844 systemd[1]: Started cri-containerd-c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53.scope - libcontainer container c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53. 
Nov 24 07:01:46.252799 containerd[1499]: time="2025-11-24T07:01:46.252685391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b7cb7b4d-2jnsq,Uid:e713eb78-8f4a-4fad-881a-0e37cd3c7e10,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"898de1f157014d205ca2f373064ff955f519ab848c6ebb8816cd463c42e1985a\"" Nov 24 07:01:46.262319 containerd[1499]: time="2025-11-24T07:01:46.261983576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:01:46.334109 containerd[1499]: time="2025-11-24T07:01:46.333962558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fn526,Uid:8cbd5edf-7893-48d4-8ee9-18409fdb58f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53\"" Nov 24 07:01:46.336671 kubelet[2674]: E1124 07:01:46.336631 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:46.342253 containerd[1499]: time="2025-11-24T07:01:46.342083078Z" level=info msg="CreateContainer within sandbox \"c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 07:01:46.371548 containerd[1499]: time="2025-11-24T07:01:46.371438521Z" level=info msg="Container 181dedd522526e4120f4dec53f995cffc52bd29012c94d25975aa20aca0e364e: CDI devices from CRI Config.CDIDevices: []" Nov 24 07:01:46.379683 containerd[1499]: time="2025-11-24T07:01:46.379594618Z" level=info msg="CreateContainer within sandbox \"c07d62a2e0692a5478db03f5d5ca4190b724d082a8392840c722cf30215bfa53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"181dedd522526e4120f4dec53f995cffc52bd29012c94d25975aa20aca0e364e\"" Nov 24 07:01:46.382444 containerd[1499]: time="2025-11-24T07:01:46.382394790Z" level=info msg="StartContainer for 
\"181dedd522526e4120f4dec53f995cffc52bd29012c94d25975aa20aca0e364e\"" Nov 24 07:01:46.384238 containerd[1499]: time="2025-11-24T07:01:46.384181253Z" level=info msg="connecting to shim 181dedd522526e4120f4dec53f995cffc52bd29012c94d25975aa20aca0e364e" address="unix:///run/containerd/s/5cdef30e91b2526aa3283bc5bf88efaec3162d60b99c55875f0da824a96794d7" protocol=ttrpc version=3 Nov 24 07:01:46.421388 systemd[1]: Started cri-containerd-181dedd522526e4120f4dec53f995cffc52bd29012c94d25975aa20aca0e364e.scope - libcontainer container 181dedd522526e4120f4dec53f995cffc52bd29012c94d25975aa20aca0e364e. Nov 24 07:01:46.532252 kubelet[2674]: E1124 07:01:46.532203 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:46.532914 containerd[1499]: time="2025-11-24T07:01:46.532199455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9pgxc,Uid:246e1c4c-d135-4d73-8092-61385bbba6cb,Namespace:calico-system,Attempt:0,}" Nov 24 07:01:46.533030 containerd[1499]: time="2025-11-24T07:01:46.532739433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qjngn,Uid:c402f308-eb4f-4016-b18f-2c146b8746b7,Namespace:kube-system,Attempt:0,}" Nov 24 07:01:46.558967 containerd[1499]: time="2025-11-24T07:01:46.555083892Z" level=info msg="StartContainer for \"181dedd522526e4120f4dec53f995cffc52bd29012c94d25975aa20aca0e364e\" returns successfully" Nov 24 07:01:46.628695 containerd[1499]: time="2025-11-24T07:01:46.628162312Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:46.631642 containerd[1499]: time="2025-11-24T07:01:46.631128800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:01:46.632607 containerd[1499]: time="2025-11-24T07:01:46.632003336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 07:01:46.632999 kubelet[2674]: E1124 07:01:46.632088 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:01:46.632999 kubelet[2674]: E1124 07:01:46.632173 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:01:46.632999 kubelet[2674]: E1124 07:01:46.632420 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cfnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b7cb7b4d-2jnsq_calico-apiserver(e713eb78-8f4a-4fad-881a-0e37cd3c7e10): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:46.635050 kubelet[2674]: E1124 07:01:46.634963 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" podUID="e713eb78-8f4a-4fad-881a-0e37cd3c7e10" Nov 24 07:01:46.855975 kubelet[2674]: E1124 07:01:46.855907 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" podUID="e713eb78-8f4a-4fad-881a-0e37cd3c7e10" Nov 24 07:01:46.863429 kubelet[2674]: E1124 07:01:46.863172 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:46.914603 systemd-networkd[1423]: calia121cab4200: Link UP Nov 24 07:01:46.920243 systemd-networkd[1423]: calia121cab4200: Gained carrier Nov 24 07:01:46.951001 kubelet[2674]: I1124 07:01:46.950452 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fn526" 
podStartSLOduration=43.950432003 podStartE2EDuration="43.950432003s" podCreationTimestamp="2025-11-24 07:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 07:01:46.950331159 +0000 UTC m=+47.614496005" watchObservedRunningTime="2025-11-24 07:01:46.950432003 +0000 UTC m=+47.614596847" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.709 [INFO][4452] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.728 [INFO][4452] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0 coredns-668d6bf9bc- kube-system c402f308-eb4f-4016-b18f-2c146b8746b7 863 0 2025-11-24 07:01:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-b-419a632674 coredns-668d6bf9bc-qjngn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia121cab4200 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjngn" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.728 [INFO][4452] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjngn" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.780 [INFO][4474] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" HandleID="k8s-pod-network.3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" Workload="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.781 [INFO][4474] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" HandleID="k8s-pod-network.3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" Workload="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-b-419a632674", "pod":"coredns-668d6bf9bc-qjngn", "timestamp":"2025-11-24 07:01:46.780734948 +0000 UTC"}, Hostname:"ci-4459.2.1-b-419a632674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.781 [INFO][4474] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.782 [INFO][4474] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.782 [INFO][4474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-b-419a632674' Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.801 [INFO][4474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.816 [INFO][4474] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.830 [INFO][4474] ipam/ipam.go 511: Trying affinity for 192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.835 [INFO][4474] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.842 [INFO][4474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.843 [INFO][4474] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.847 [INFO][4474] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.867 [INFO][4474] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.884 [INFO][4474] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.18.199/26] block=192.168.18.192/26 handle="k8s-pod-network.3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.885 [INFO][4474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.199/26] handle="k8s-pod-network.3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.885 [INFO][4474] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 07:01:46.951574 containerd[1499]: 2025-11-24 07:01:46.887 [INFO][4474] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.199/26] IPv6=[] ContainerID="3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" HandleID="k8s-pod-network.3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" Workload="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0" Nov 24 07:01:46.954020 containerd[1499]: 2025-11-24 07:01:46.893 [INFO][4452] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjngn" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c402f308-eb4f-4016-b18f-2c146b8746b7", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"", Pod:"coredns-668d6bf9bc-qjngn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia121cab4200", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:46.954020 containerd[1499]: 2025-11-24 07:01:46.894 [INFO][4452] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.199/32] ContainerID="3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjngn" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0" Nov 24 07:01:46.954020 containerd[1499]: 2025-11-24 07:01:46.895 [INFO][4452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia121cab4200 ContainerID="3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjngn" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0" Nov 24 07:01:46.954020 containerd[1499]: 2025-11-24 07:01:46.921 [INFO][4452] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjngn" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0" Nov 24 07:01:46.954020 containerd[1499]: 2025-11-24 07:01:46.922 [INFO][4452] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjngn" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c402f308-eb4f-4016-b18f-2c146b8746b7", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d", Pod:"coredns-668d6bf9bc-qjngn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia121cab4200", MAC:"fa:b3:04:59:22:1c", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:46.954020 containerd[1499]: 2025-11-24 07:01:46.945 [INFO][4452] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjngn" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-coredns--668d6bf9bc--qjngn-eth0" Nov 24 07:01:47.012151 containerd[1499]: time="2025-11-24T07:01:47.012089162Z" level=info msg="connecting to shim 3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d" address="unix:///run/containerd/s/6dc5b3c3ee4dff98e378a1fb0fbb476f33384740e62b43634429e91458f65007" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:01:47.065107 systemd-networkd[1423]: cali35ea239e260: Gained IPv6LL Nov 24 07:01:47.082192 systemd[1]: Started cri-containerd-3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d.scope - libcontainer container 3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d. 
Nov 24 07:01:47.212843 systemd-networkd[1423]: cali5c897de805b: Link UP Nov 24 07:01:47.214714 systemd-networkd[1423]: cali5c897de805b: Gained carrier Nov 24 07:01:47.247175 containerd[1499]: time="2025-11-24T07:01:47.246798569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qjngn,Uid:c402f308-eb4f-4016-b18f-2c146b8746b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d\"" Nov 24 07:01:47.250691 kubelet[2674]: E1124 07:01:47.250648 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:47.260172 containerd[1499]: time="2025-11-24T07:01:47.260109035Z" level=info msg="CreateContainer within sandbox \"3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:46.688 [INFO][4443] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:46.726 [INFO][4443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0 goldmane-666569f655- calico-system 246e1c4c-d135-4d73-8092-61385bbba6cb 869 0 2025-11-24 07:01:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.1-b-419a632674 goldmane-666569f655-9pgxc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5c897de805b [] [] }} ContainerID="4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" Namespace="calico-system" Pod="goldmane-666569f655-9pgxc" 
WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:46.726 [INFO][4443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" Namespace="calico-system" Pod="goldmane-666569f655-9pgxc" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:46.830 [INFO][4472] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" HandleID="k8s-pod-network.4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" Workload="ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:46.830 [INFO][4472] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" HandleID="k8s-pod-network.4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" Workload="ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-b-419a632674", "pod":"goldmane-666569f655-9pgxc", "timestamp":"2025-11-24 07:01:46.830106607 +0000 UTC"}, Hostname:"ci-4459.2.1-b-419a632674", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:46.830 [INFO][4472] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:46.885 [INFO][4472] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:46.885 [INFO][4472] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-b-419a632674' Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:46.957 [INFO][4472] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:47.013 [INFO][4472] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-b-419a632674" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:47.029 [INFO][4472] ipam/ipam.go 511: Trying affinity for 192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:47.049 [INFO][4472] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:47.055 [INFO][4472] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4459.2.1-b-419a632674" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:47.055 [INFO][4472] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:47.060 [INFO][4472] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8 Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:47.099 [INFO][4472] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:47.183 [INFO][4472] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.18.200/26] block=192.168.18.192/26 handle="k8s-pod-network.4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:47.183 [INFO][4472] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.200/26] handle="k8s-pod-network.4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" host="ci-4459.2.1-b-419a632674" Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:47.183 [INFO][4472] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 07:01:47.277737 containerd[1499]: 2025-11-24 07:01:47.183 [INFO][4472] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.200/26] IPv6=[] ContainerID="4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" HandleID="k8s-pod-network.4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" Workload="ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0" Nov 24 07:01:47.278548 containerd[1499]: 2025-11-24 07:01:47.196 [INFO][4443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" Namespace="calico-system" Pod="goldmane-666569f655-9pgxc" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"246e1c4c-d135-4d73-8092-61385bbba6cb", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"", Pod:"goldmane-666569f655-9pgxc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5c897de805b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:47.278548 containerd[1499]: 2025-11-24 07:01:47.196 [INFO][4443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.200/32] ContainerID="4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" Namespace="calico-system" Pod="goldmane-666569f655-9pgxc" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0" Nov 24 07:01:47.278548 containerd[1499]: 2025-11-24 07:01:47.196 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c897de805b ContainerID="4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" Namespace="calico-system" Pod="goldmane-666569f655-9pgxc" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0" Nov 24 07:01:47.278548 containerd[1499]: 2025-11-24 07:01:47.219 [INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" Namespace="calico-system" Pod="goldmane-666569f655-9pgxc" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0" Nov 24 07:01:47.278548 containerd[1499]: 2025-11-24 07:01:47.229 [INFO][4443] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" Namespace="calico-system" Pod="goldmane-666569f655-9pgxc" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"246e1c4c-d135-4d73-8092-61385bbba6cb", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 7, 1, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-b-419a632674", ContainerID:"4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8", Pod:"goldmane-666569f655-9pgxc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5c897de805b", MAC:"32:0b:c0:24:35:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:01:47.278548 containerd[1499]: 2025-11-24 07:01:47.268 [INFO][4443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" Namespace="calico-system" Pod="goldmane-666569f655-9pgxc" WorkloadEndpoint="ci--4459.2.1--b--419a632674-k8s-goldmane--666569f655--9pgxc-eth0" Nov 24 07:01:47.278548 containerd[1499]: time="2025-11-24T07:01:47.278422899Z" level=info msg="Container 83a4a71d6fc6c4225af1a441bb3d77b938bede8e663f942d6ec7e762be7c4075: CDI devices from CRI Config.CDIDevices: []" Nov 24 07:01:47.297305 containerd[1499]: time="2025-11-24T07:01:47.297200618Z" level=info msg="CreateContainer within sandbox \"3ed15162805c66f2f769ea58fb0a5b1535941612b7e6857e81ade9bf3ae1788d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83a4a71d6fc6c4225af1a441bb3d77b938bede8e663f942d6ec7e762be7c4075\"" Nov 24 07:01:47.299257 containerd[1499]: time="2025-11-24T07:01:47.299212606Z" level=info msg="StartContainer for \"83a4a71d6fc6c4225af1a441bb3d77b938bede8e663f942d6ec7e762be7c4075\"" Nov 24 07:01:47.303157 containerd[1499]: time="2025-11-24T07:01:47.303045448Z" level=info msg="connecting to shim 83a4a71d6fc6c4225af1a441bb3d77b938bede8e663f942d6ec7e762be7c4075" address="unix:///run/containerd/s/6dc5b3c3ee4dff98e378a1fb0fbb476f33384740e62b43634429e91458f65007" protocol=ttrpc version=3 Nov 24 07:01:47.366328 systemd[1]: Started cri-containerd-83a4a71d6fc6c4225af1a441bb3d77b938bede8e663f942d6ec7e762be7c4075.scope - libcontainer container 83a4a71d6fc6c4225af1a441bb3d77b938bede8e663f942d6ec7e762be7c4075. 
Nov 24 07:01:47.377443 containerd[1499]: time="2025-11-24T07:01:47.377374911Z" level=info msg="connecting to shim 4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8" address="unix:///run/containerd/s/1db40efcf893a7cfe759c3bdbc84e0a47c8912865950b44176df2b3dbfd878a0" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:01:47.446660 systemd[1]: Started cri-containerd-4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8.scope - libcontainer container 4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8. Nov 24 07:01:47.498074 containerd[1499]: time="2025-11-24T07:01:47.498023706Z" level=info msg="StartContainer for \"83a4a71d6fc6c4225af1a441bb3d77b938bede8e663f942d6ec7e762be7c4075\" returns successfully" Nov 24 07:01:47.513151 systemd-networkd[1423]: cali0e183e6bec3: Gained IPv6LL Nov 24 07:01:47.734186 containerd[1499]: time="2025-11-24T07:01:47.734100505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9pgxc,Uid:246e1c4c-d135-4d73-8092-61385bbba6cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d3761006ef7ec79ce66e4bd94944cb7c8b01ba82ec80c608694bcfb2f462bb8\"" Nov 24 07:01:47.739914 containerd[1499]: time="2025-11-24T07:01:47.739855212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 07:01:47.872706 kubelet[2674]: E1124 07:01:47.872482 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:47.879826 kubelet[2674]: E1124 07:01:47.879152 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:47.881103 kubelet[2674]: E1124 07:01:47.881032 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" podUID="e713eb78-8f4a-4fad-881a-0e37cd3c7e10" Nov 24 07:01:47.956304 kubelet[2674]: I1124 07:01:47.956050 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qjngn" podStartSLOduration=44.95602116 podStartE2EDuration="44.95602116s" podCreationTimestamp="2025-11-24 07:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 07:01:47.9105558 +0000 UTC m=+48.574720647" watchObservedRunningTime="2025-11-24 07:01:47.95602116 +0000 UTC m=+48.620186004" Nov 24 07:01:48.098313 containerd[1499]: time="2025-11-24T07:01:48.098092092Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:48.102343 containerd[1499]: time="2025-11-24T07:01:48.102149277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 07:01:48.102343 containerd[1499]: time="2025-11-24T07:01:48.102294682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 07:01:48.103058 kubelet[2674]: E1124 07:01:48.102549 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 07:01:48.103881 kubelet[2674]: E1124 07:01:48.102967 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 07:01:48.104288 kubelet[2674]: E1124 07:01:48.104222 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkftp,ReadOnly:
true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9pgxc_calico-system(246e1c4c-d135-4d73-8092-61385bbba6cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:48.106138 kubelet[2674]: E1124 07:01:48.106073 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pgxc" podUID="246e1c4c-d135-4d73-8092-61385bbba6cb" Nov 24 07:01:48.473140 systemd-networkd[1423]: cali5c897de805b: Gained IPv6LL Nov 24 07:01:48.883407 kubelet[2674]: E1124 07:01:48.883361 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:48.885340 kubelet[2674]: E1124 07:01:48.885247 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:48.888249 kubelet[2674]: E1124 07:01:48.888205 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pgxc" podUID="246e1c4c-d135-4d73-8092-61385bbba6cb" Nov 24 07:01:48.921634 systemd-networkd[1423]: calia121cab4200: Gained IPv6LL Nov 24 07:01:49.557342 kubelet[2674]: I1124 07:01:49.556874 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 07:01:49.557342 kubelet[2674]: E1124 07:01:49.557334 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:49.886182 kubelet[2674]: E1124 07:01:49.885025 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:49.887405 kubelet[2674]: E1124 07:01:49.887287 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:01:50.947227 systemd-networkd[1423]: vxlan.calico: Link UP Nov 24 07:01:50.947234 systemd-networkd[1423]: vxlan.calico: Gained carrier Nov 24 07:01:52.569232 systemd-networkd[1423]: vxlan.calico: Gained IPv6LL Nov 24 07:01:54.529120 containerd[1499]: time="2025-11-24T07:01:54.529020046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 07:01:54.845351 containerd[1499]: time="2025-11-24T07:01:54.844878498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:54.846458 containerd[1499]: time="2025-11-24T07:01:54.846261682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 07:01:54.846458 containerd[1499]: time="2025-11-24T07:01:54.846423680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 07:01:54.846949 kubelet[2674]: E1124 07:01:54.846641 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 07:01:54.846949 kubelet[2674]: E1124 07:01:54.846727 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 07:01:54.848450 kubelet[2674]: E1124 07:01:54.848314 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4b7086ed6818482a82815cc41f7813f5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7c6h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-865bc4d9cd-wsm2s_calico-system(45848e32-6c02-41b9-837d-21663011857a): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:54.852419 containerd[1499]: time="2025-11-24T07:01:54.852373585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 07:01:55.192744 containerd[1499]: time="2025-11-24T07:01:55.192099175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:55.193188 containerd[1499]: time="2025-11-24T07:01:55.193133638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 07:01:55.193512 containerd[1499]: time="2025-11-24T07:01:55.193234532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 07:01:55.193591 kubelet[2674]: E1124 07:01:55.193461 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 07:01:55.193591 kubelet[2674]: E1124 07:01:55.193516 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 07:01:55.193866 kubelet[2674]: E1124 07:01:55.193803 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7c6h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-865bc4d9cd-wsm2s_calico-system(45848e32-6c02-41b9-837d-21663011857a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:55.195093 kubelet[2674]: E1124 07:01:55.195042 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-865bc4d9cd-wsm2s" podUID="45848e32-6c02-41b9-837d-21663011857a" Nov 24 07:01:56.531609 containerd[1499]: time="2025-11-24T07:01:56.531494613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 07:01:56.855498 containerd[1499]: time="2025-11-24T07:01:56.855265544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:56.856162 containerd[1499]: time="2025-11-24T07:01:56.856072007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 07:01:56.856472 containerd[1499]: time="2025-11-24T07:01:56.856219096Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 07:01:56.856871 kubelet[2674]: E1124 07:01:56.856497 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:01:56.856871 kubelet[2674]: E1124 07:01:56.856584 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:01:56.856871 kubelet[2674]: E1124 07:01:56.856794 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfsxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xq9xq_calico-system(c34e73b2-4364-4998-a5a0-398cc36c9e15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:56.862052 containerd[1499]: time="2025-11-24T07:01:56.861977060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 07:01:57.154502 containerd[1499]: time="2025-11-24T07:01:57.154036511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:57.156024 containerd[1499]: time="2025-11-24T07:01:57.155926319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 07:01:57.156458 containerd[1499]: time="2025-11-24T07:01:57.155951709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 07:01:57.157682 kubelet[2674]: E1124 07:01:57.157177 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:01:57.157682 kubelet[2674]: E1124 07:01:57.157264 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:01:57.157682 kubelet[2674]: E1124 
07:01:57.157439 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfsxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-xq9xq_calico-system(c34e73b2-4364-4998-a5a0-398cc36c9e15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:57.158932 kubelet[2674]: E1124 07:01:57.158873 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15" Nov 24 07:01:57.599711 systemd[1]: Started sshd@7-24.144.92.64:22-139.178.68.195:37322.service - OpenSSH per-connection server daemon (139.178.68.195:37322). Nov 24 07:01:57.734254 sshd[4840]: Accepted publickey for core from 139.178.68.195 port 37322 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:01:57.737042 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:01:57.745091 systemd-logind[1472]: New session 8 of user core. Nov 24 07:01:57.759284 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 24 07:01:58.277785 sshd[4843]: Connection closed by 139.178.68.195 port 37322 Nov 24 07:01:58.278997 sshd-session[4840]: pam_unix(sshd:session): session closed for user core Nov 24 07:01:58.289598 systemd[1]: sshd@7-24.144.92.64:22-139.178.68.195:37322.service: Deactivated successfully. Nov 24 07:01:58.294447 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 07:01:58.297493 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit. Nov 24 07:01:58.300697 systemd-logind[1472]: Removed session 8. Nov 24 07:01:58.528888 containerd[1499]: time="2025-11-24T07:01:58.528618970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 07:01:58.917253 containerd[1499]: time="2025-11-24T07:01:58.916952339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:58.918229 containerd[1499]: time="2025-11-24T07:01:58.918093626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 07:01:58.918229 containerd[1499]: time="2025-11-24T07:01:58.918149497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 07:01:58.918560 kubelet[2674]: E1124 07:01:58.918518 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:01:58.919338 kubelet[2674]: E1124 07:01:58.919054 2674 kuberuntime_image.go:55] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:01:58.919338 kubelet[2674]: E1124 07:01:58.919242 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z984,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f9b89cb9c-ljkq4_calico-system(e3e5450f-313c-477b-87ac-4da097ca2eb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:58.920577 kubelet[2674]: E1124 07:01:58.920470 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" podUID="e3e5450f-313c-477b-87ac-4da097ca2eb2" Nov 24 07:01:59.530939 containerd[1499]: time="2025-11-24T07:01:59.530410036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:01:59.859597 containerd[1499]: time="2025-11-24T07:01:59.859176862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:01:59.861050 containerd[1499]: time="2025-11-24T07:01:59.860852677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:01:59.861050 containerd[1499]: time="2025-11-24T07:01:59.860876655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 07:01:59.861255 kubelet[2674]: E1124 07:01:59.861168 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:01:59.861255 kubelet[2674]: E1124 07:01:59.861237 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:01:59.861452 kubelet[2674]: E1124 07:01:59.861396 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cfnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b7cb7b4d-2jnsq_calico-apiserver(e713eb78-8f4a-4fad-881a-0e37cd3c7e10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:01:59.862921 kubelet[2674]: E1124 07:01:59.862833 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" podUID="e713eb78-8f4a-4fad-881a-0e37cd3c7e10" Nov 24 07:02:00.530795 containerd[1499]: time="2025-11-24T07:02:00.530446539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:02:00.871645 containerd[1499]: 
time="2025-11-24T07:02:00.871495183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:02:00.872930 containerd[1499]: time="2025-11-24T07:02:00.872818842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:02:00.873136 containerd[1499]: time="2025-11-24T07:02:00.873067555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 07:02:00.873550 kubelet[2674]: E1124 07:02:00.873329 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:02:00.873550 kubelet[2674]: E1124 07:02:00.873389 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:02:00.874226 kubelet[2674]: E1124 07:02:00.873999 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwn5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b7cb7b4d-lbdp7_calico-apiserver(8bd2d503-fa71-45be-9f52-ae92f15b3067): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:02:00.875633 kubelet[2674]: E1124 07:02:00.875512 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" podUID="8bd2d503-fa71-45be-9f52-ae92f15b3067" Nov 24 07:02:03.294737 systemd[1]: Started sshd@8-24.144.92.64:22-139.178.68.195:41814.service - OpenSSH per-connection server daemon (139.178.68.195:41814). Nov 24 07:02:03.369685 sshd[4867]: Accepted publickey for core from 139.178.68.195 port 41814 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:02:03.371583 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:02:03.379399 systemd-logind[1472]: New session 9 of user core. Nov 24 07:02:03.386215 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 07:02:03.537699 containerd[1499]: time="2025-11-24T07:02:03.537642088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 07:02:03.568067 sshd[4870]: Connection closed by 139.178.68.195 port 41814 Nov 24 07:02:03.567287 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Nov 24 07:02:03.573055 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit. Nov 24 07:02:03.573535 systemd[1]: sshd@8-24.144.92.64:22-139.178.68.195:41814.service: Deactivated successfully. Nov 24 07:02:03.578945 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 24 07:02:03.584881 systemd-logind[1472]: Removed session 9. Nov 24 07:02:03.855151 containerd[1499]: time="2025-11-24T07:02:03.854737561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:02:03.856494 containerd[1499]: time="2025-11-24T07:02:03.856409243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 07:02:03.856765 containerd[1499]: time="2025-11-24T07:02:03.856480100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 07:02:03.857173 kubelet[2674]: E1124 07:02:03.857085 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 07:02:03.858320 kubelet[2674]: E1124 07:02:03.857193 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 07:02:03.858320 kubelet[2674]: E1124 07:02:03.857491 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkftp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9pgxc_calico-system(246e1c4c-d135-4d73-8092-61385bbba6cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 07:02:03.859524 kubelet[2674]: E1124 07:02:03.859231 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pgxc" podUID="246e1c4c-d135-4d73-8092-61385bbba6cb" Nov 24 07:02:06.530560 kubelet[2674]: E1124 07:02:06.530488 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-865bc4d9cd-wsm2s" podUID="45848e32-6c02-41b9-837d-21663011857a" Nov 24 07:02:08.588215 systemd[1]: Started sshd@9-24.144.92.64:22-139.178.68.195:41818.service - OpenSSH per-connection server daemon (139.178.68.195:41818). Nov 24 07:02:08.653744 sshd[4884]: Accepted publickey for core from 139.178.68.195 port 41818 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:02:08.657047 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:02:08.667768 systemd-logind[1472]: New session 10 of user core. Nov 24 07:02:08.673184 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 24 07:02:08.851035 sshd[4887]: Connection closed by 139.178.68.195 port 41818 Nov 24 07:02:08.851252 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Nov 24 07:02:08.868315 systemd[1]: sshd@9-24.144.92.64:22-139.178.68.195:41818.service: Deactivated successfully. Nov 24 07:02:08.873794 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 07:02:08.875518 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit. Nov 24 07:02:08.882752 systemd[1]: Started sshd@10-24.144.92.64:22-139.178.68.195:41834.service - OpenSSH per-connection server daemon (139.178.68.195:41834). 
Nov 24 07:02:08.884498 systemd-logind[1472]: Removed session 10. Nov 24 07:02:08.955858 sshd[4899]: Accepted publickey for core from 139.178.68.195 port 41834 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:02:08.958083 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:02:08.966690 systemd-logind[1472]: New session 11 of user core. Nov 24 07:02:08.977245 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 24 07:02:09.192504 sshd[4902]: Connection closed by 139.178.68.195 port 41834 Nov 24 07:02:09.194356 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Nov 24 07:02:09.212973 systemd[1]: sshd@10-24.144.92.64:22-139.178.68.195:41834.service: Deactivated successfully. Nov 24 07:02:09.218568 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 07:02:09.221852 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit. Nov 24 07:02:09.228198 systemd[1]: Started sshd@11-24.144.92.64:22-139.178.68.195:41844.service - OpenSSH per-connection server daemon (139.178.68.195:41844). Nov 24 07:02:09.233076 systemd-logind[1472]: Removed session 11. Nov 24 07:02:09.335829 sshd[4913]: Accepted publickey for core from 139.178.68.195 port 41844 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:02:09.337788 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:02:09.344508 systemd-logind[1472]: New session 12 of user core. Nov 24 07:02:09.348132 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 24 07:02:09.543111 sshd[4916]: Connection closed by 139.178.68.195 port 41844 Nov 24 07:02:09.544138 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Nov 24 07:02:09.549185 systemd[1]: sshd@11-24.144.92.64:22-139.178.68.195:41844.service: Deactivated successfully. 
Nov 24 07:02:09.552445 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 07:02:09.555859 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit. Nov 24 07:02:09.557446 systemd-logind[1472]: Removed session 12. Nov 24 07:02:11.530949 kubelet[2674]: E1124 07:02:11.530633 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" podUID="e3e5450f-313c-477b-87ac-4da097ca2eb2" Nov 24 07:02:12.533494 kubelet[2674]: E1124 07:02:12.533394 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15" Nov 24 
07:02:13.530379 kubelet[2674]: E1124 07:02:13.530270 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" podUID="8bd2d503-fa71-45be-9f52-ae92f15b3067" Nov 24 07:02:14.530174 kubelet[2674]: E1124 07:02:14.529282 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" podUID="e713eb78-8f4a-4fad-881a-0e37cd3c7e10" Nov 24 07:02:14.568430 systemd[1]: Started sshd@12-24.144.92.64:22-139.178.68.195:51184.service - OpenSSH per-connection server daemon (139.178.68.195:51184). Nov 24 07:02:14.700916 sshd[4966]: Accepted publickey for core from 139.178.68.195 port 51184 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:02:14.703872 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:02:14.710608 systemd-logind[1472]: New session 13 of user core. Nov 24 07:02:14.717178 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 24 07:02:14.888273 sshd[4969]: Connection closed by 139.178.68.195 port 51184 Nov 24 07:02:14.889251 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Nov 24 07:02:14.895858 systemd[1]: sshd@12-24.144.92.64:22-139.178.68.195:51184.service: Deactivated successfully. Nov 24 07:02:14.901480 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 07:02:14.904826 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit. Nov 24 07:02:14.908259 systemd-logind[1472]: Removed session 13. Nov 24 07:02:17.529070 kubelet[2674]: E1124 07:02:17.528277 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:02:17.532370 kubelet[2674]: E1124 07:02:17.532292 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pgxc" podUID="246e1c4c-d135-4d73-8092-61385bbba6cb" Nov 24 07:02:19.910394 systemd[1]: Started sshd@13-24.144.92.64:22-139.178.68.195:51196.service - OpenSSH per-connection server daemon (139.178.68.195:51196). Nov 24 07:02:19.980924 sshd[4980]: Accepted publickey for core from 139.178.68.195 port 51196 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:02:19.982862 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:02:19.993304 systemd-logind[1472]: New session 14 of user core. Nov 24 07:02:19.997719 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 24 07:02:20.172059 sshd[4983]: Connection closed by 139.178.68.195 port 51196 Nov 24 07:02:20.172681 sshd-session[4980]: pam_unix(sshd:session): session closed for user core Nov 24 07:02:20.180622 systemd[1]: sshd@13-24.144.92.64:22-139.178.68.195:51196.service: Deactivated successfully. Nov 24 07:02:20.185308 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 07:02:20.188583 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit. Nov 24 07:02:20.191463 systemd-logind[1472]: Removed session 14. Nov 24 07:02:20.531325 containerd[1499]: time="2025-11-24T07:02:20.531260391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 07:02:20.863366 containerd[1499]: time="2025-11-24T07:02:20.863069346Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:02:20.865962 containerd[1499]: time="2025-11-24T07:02:20.864155070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 07:02:20.866535 kubelet[2674]: E1124 07:02:20.866422 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 07:02:20.866535 kubelet[2674]: E1124 07:02:20.866499 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 07:02:20.867582 kubelet[2674]: E1124 07:02:20.867201 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4b7086ed6818482a82815cc41f7813f5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7c6h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-865bc4d9cd-wsm2s_calico-system(45848e32-6c02-41b9-837d-21663011857a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" 
Nov 24 07:02:20.884487 containerd[1499]: time="2025-11-24T07:02:20.864295640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 07:02:20.884663 containerd[1499]: time="2025-11-24T07:02:20.869684703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 07:02:21.230303 containerd[1499]: time="2025-11-24T07:02:21.230116988Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:02:21.231118 containerd[1499]: time="2025-11-24T07:02:21.230995861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 07:02:21.231118 containerd[1499]: time="2025-11-24T07:02:21.231067592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 07:02:21.231675 kubelet[2674]: E1124 07:02:21.231576 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 07:02:21.231881 kubelet[2674]: E1124 07:02:21.231838 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 
07:02:21.232222 kubelet[2674]: E1124 07:02:21.232160 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7c6h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-865bc4d9cd-wsm2s_calico-system(45848e32-6c02-41b9-837d-21663011857a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 07:02:21.234452 kubelet[2674]: E1124 07:02:21.234385 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-865bc4d9cd-wsm2s" podUID="45848e32-6c02-41b9-837d-21663011857a" Nov 24 07:02:24.530361 containerd[1499]: time="2025-11-24T07:02:24.530259827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 07:02:24.882590 containerd[1499]: time="2025-11-24T07:02:24.882306893Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:02:24.883334 containerd[1499]: time="2025-11-24T07:02:24.883201442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 07:02:24.883334 containerd[1499]: time="2025-11-24T07:02:24.883300452Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 07:02:24.883575 kubelet[2674]: E1124 07:02:24.883483 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:02:24.883575 kubelet[2674]: E1124 07:02:24.883537 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:02:24.884450 kubelet[2674]: E1124 07:02:24.884104 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfsxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xq9xq_calico-system(c34e73b2-4364-4998-a5a0-398cc36c9e15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 07:02:24.884567 containerd[1499]: time="2025-11-24T07:02:24.883887946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:02:25.190474 systemd[1]: Started sshd@14-24.144.92.64:22-139.178.68.195:42160.service - OpenSSH per-connection server daemon (139.178.68.195:42160). Nov 24 07:02:25.281555 containerd[1499]: time="2025-11-24T07:02:25.281448310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:02:25.283114 containerd[1499]: time="2025-11-24T07:02:25.283028408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:02:25.283114 containerd[1499]: time="2025-11-24T07:02:25.283076440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 07:02:25.283490 kubelet[2674]: E1124 07:02:25.283347 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:02:25.283490 kubelet[2674]: E1124 07:02:25.283414 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:02:25.284373 kubelet[2674]: E1124 07:02:25.284244 
2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwn5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b7cb7b4d-lbdp7_calico-apiserver(8bd2d503-fa71-45be-9f52-ae92f15b3067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:02:25.285326 containerd[1499]: time="2025-11-24T07:02:25.284223016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 07:02:25.286655 kubelet[2674]: E1124 07:02:25.286605 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" podUID="8bd2d503-fa71-45be-9f52-ae92f15b3067" Nov 24 07:02:25.338891 sshd[4995]: Accepted 
publickey for core from 139.178.68.195 port 42160 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:02:25.342793 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:02:25.350767 systemd-logind[1472]: New session 15 of user core. Nov 24 07:02:25.360215 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 07:02:25.611804 containerd[1499]: time="2025-11-24T07:02:25.611591891Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:02:25.613706 containerd[1499]: time="2025-11-24T07:02:25.613434463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 07:02:25.613706 containerd[1499]: time="2025-11-24T07:02:25.613638478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 07:02:25.614380 kubelet[2674]: E1124 07:02:25.613880 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:02:25.614380 kubelet[2674]: E1124 07:02:25.614015 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:02:25.614380 kubelet[2674]: E1124 07:02:25.614189 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfsxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xq9xq_calico-system(c34e73b2-4364-4998-a5a0-398cc36c9e15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 07:02:25.616932 kubelet[2674]: E1124 07:02:25.616081 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15" Nov 24 07:02:25.654913 sshd[4998]: Connection closed by 139.178.68.195 port 42160 Nov 24 07:02:25.655765 sshd-session[4995]: pam_unix(sshd:session): session closed for user core Nov 24 07:02:25.663863 systemd[1]: sshd@14-24.144.92.64:22-139.178.68.195:42160.service: Deactivated successfully. Nov 24 07:02:25.669341 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 07:02:25.673119 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit. Nov 24 07:02:25.675830 systemd-logind[1472]: Removed session 15. 
Nov 24 07:02:26.532440 containerd[1499]: time="2025-11-24T07:02:26.532252191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 07:02:26.907758 containerd[1499]: time="2025-11-24T07:02:26.907508776Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:02:26.908614 containerd[1499]: time="2025-11-24T07:02:26.908544374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 07:02:26.909384 containerd[1499]: time="2025-11-24T07:02:26.908675812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 07:02:26.909459 kubelet[2674]: E1124 07:02:26.908892 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:02:26.909459 kubelet[2674]: E1124 07:02:26.908993 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:02:26.910957 kubelet[2674]: E1124 07:02:26.910174 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z984,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f9b89cb9c-ljkq4_calico-system(e3e5450f-313c-477b-87ac-4da097ca2eb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 07:02:26.912396 kubelet[2674]: E1124 07:02:26.912296 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" podUID="e3e5450f-313c-477b-87ac-4da097ca2eb2" Nov 24 07:02:29.530721 containerd[1499]: time="2025-11-24T07:02:29.530650596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:02:29.880295 containerd[1499]: 
time="2025-11-24T07:02:29.880096934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:02:29.881456 containerd[1499]: time="2025-11-24T07:02:29.881377181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:02:29.881456 containerd[1499]: time="2025-11-24T07:02:29.881417121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 07:02:29.881846 kubelet[2674]: E1124 07:02:29.881758 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:02:29.881846 kubelet[2674]: E1124 07:02:29.881820 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:02:29.882557 kubelet[2674]: E1124 07:02:29.881969 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cfnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b7cb7b4d-2jnsq_calico-apiserver(e713eb78-8f4a-4fad-881a-0e37cd3c7e10): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:02:29.883366 kubelet[2674]: E1124 07:02:29.883254 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" podUID="e713eb78-8f4a-4fad-881a-0e37cd3c7e10" Nov 24 07:02:30.670048 systemd[1]: Started sshd@15-24.144.92.64:22-139.178.68.195:46250.service - OpenSSH per-connection server daemon (139.178.68.195:46250). Nov 24 07:02:30.750111 sshd[5010]: Accepted publickey for core from 139.178.68.195 port 46250 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:02:30.751863 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:02:30.758669 systemd-logind[1472]: New session 16 of user core. Nov 24 07:02:30.766237 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 24 07:02:30.924517 sshd[5013]: Connection closed by 139.178.68.195 port 46250 Nov 24 07:02:30.925370 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Nov 24 07:02:30.937687 systemd[1]: sshd@15-24.144.92.64:22-139.178.68.195:46250.service: Deactivated successfully. Nov 24 07:02:30.940605 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 07:02:30.943590 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit. 
Nov 24 07:02:30.947079 systemd[1]: Started sshd@16-24.144.92.64:22-139.178.68.195:46252.service - OpenSSH per-connection server daemon (139.178.68.195:46252). Nov 24 07:02:30.949817 systemd-logind[1472]: Removed session 16. Nov 24 07:02:31.017280 sshd[5024]: Accepted publickey for core from 139.178.68.195 port 46252 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:02:31.019455 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:02:31.026100 systemd-logind[1472]: New session 17 of user core. Nov 24 07:02:31.046210 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 24 07:02:31.380035 sshd[5027]: Connection closed by 139.178.68.195 port 46252 Nov 24 07:02:31.380762 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Nov 24 07:02:31.398812 systemd[1]: sshd@16-24.144.92.64:22-139.178.68.195:46252.service: Deactivated successfully. Nov 24 07:02:31.403208 systemd[1]: session-17.scope: Deactivated successfully. Nov 24 07:02:31.407131 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit. Nov 24 07:02:31.417047 systemd[1]: Started sshd@17-24.144.92.64:22-139.178.68.195:46268.service - OpenSSH per-connection server daemon (139.178.68.195:46268). Nov 24 07:02:31.419515 systemd-logind[1472]: Removed session 17. Nov 24 07:02:31.506360 sshd[5043]: Accepted publickey for core from 139.178.68.195 port 46268 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:02:31.508547 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:02:31.515189 systemd-logind[1472]: New session 18 of user core. Nov 24 07:02:31.525320 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 24 07:02:31.530840 kubelet[2674]: E1124 07:02:31.530367 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:02:31.534717 containerd[1499]: time="2025-11-24T07:02:31.532320372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 24 07:02:31.902368 containerd[1499]: time="2025-11-24T07:02:31.902245629Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 24 07:02:31.905659 containerd[1499]: time="2025-11-24T07:02:31.905530378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 24 07:02:31.905951 containerd[1499]: time="2025-11-24T07:02:31.905628668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 24 07:02:31.906193 kubelet[2674]: E1124 07:02:31.906130 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 24 07:02:31.906337 kubelet[2674]: E1124 07:02:31.906211 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 24 07:02:31.906444 kubelet[2674]: E1124 07:02:31.906389 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkftp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9pgxc_calico-system(246e1c4c-d135-4d73-8092-61385bbba6cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 24 07:02:31.907703 kubelet[2674]: E1124 07:02:31.907655 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pgxc" podUID="246e1c4c-d135-4d73-8092-61385bbba6cb"
Nov 24 07:02:32.311705 sshd[5048]: Connection closed by 139.178.68.195 port 46268
Nov 24 07:02:32.313801 sshd-session[5043]: pam_unix(sshd:session): session closed for user core
Nov 24 07:02:32.335365 systemd[1]: sshd@17-24.144.92.64:22-139.178.68.195:46268.service: Deactivated successfully.
Nov 24 07:02:32.340627 systemd[1]: session-18.scope: Deactivated successfully.
Nov 24 07:02:32.346230 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit.
Nov 24 07:02:32.353915 systemd[1]: Started sshd@18-24.144.92.64:22-139.178.68.195:46284.service - OpenSSH per-connection server daemon (139.178.68.195:46284).
Nov 24 07:02:32.356090 systemd-logind[1472]: Removed session 18.
Nov 24 07:02:32.445170 sshd[5064]: Accepted publickey for core from 139.178.68.195 port 46284 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:02:32.447811 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:02:32.456856 systemd-logind[1472]: New session 19 of user core.
Nov 24 07:02:32.463210 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 24 07:02:32.951677 sshd[5068]: Connection closed by 139.178.68.195 port 46284
Nov 24 07:02:32.954149 sshd-session[5064]: pam_unix(sshd:session): session closed for user core
Nov 24 07:02:32.967669 systemd[1]: sshd@18-24.144.92.64:22-139.178.68.195:46284.service: Deactivated successfully.
Nov 24 07:02:32.972606 systemd[1]: session-19.scope: Deactivated successfully.
Nov 24 07:02:32.974745 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit.
Nov 24 07:02:32.982036 systemd[1]: Started sshd@19-24.144.92.64:22-139.178.68.195:46292.service - OpenSSH per-connection server daemon (139.178.68.195:46292).
Nov 24 07:02:32.983829 systemd-logind[1472]: Removed session 19.
Nov 24 07:02:33.104468 sshd[5078]: Accepted publickey for core from 139.178.68.195 port 46292 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:02:33.106642 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:02:33.113428 systemd-logind[1472]: New session 20 of user core.
Nov 24 07:02:33.119197 systemd[1]: Started session-20.scope - Session 20 of User core.
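The goldmane entries above show the pattern this log repeats for every Calico image: a kubelet `PullImage` request, a 404 from `ghcr.io`, and a `pod_workers.go` "Error syncing pod" record carrying the failed reference and pod UID. A small sketch for triaging such a log — the helper and its regexes are hypothetical (not part of kubelet or containerd) and assume the journald quoting style seen in these entries:

```python
import re

# Hypothetical helper: extract the failed image reference(s) and the pod
# UID from a single kubelet "Error syncing pod" log entry, so repeated
# ImagePullBackOff failures can be grouped per image.
# The `\\+"` in the pattern matches the backslash-escaped quotes that
# journald applies to nested strings in these entries.
IMAGE_RE = re.compile(r'failed to pull and unpack image \\+"([^"\\]+)\\+"')
POD_UID_RE = re.compile(r'podUID="([0-9a-f-]+)"')

def failed_images(line):
    """Return (sorted unique image refs, podUID or None) for one log line."""
    images = sorted(set(IMAGE_RE.findall(line)))
    uid = POD_UID_RE.search(line)
    return images, uid.group(1) if uid else None
```

Run over the whole journal, this collapses the noise above into a short list of missing tags (here, every `ghcr.io/flatcar/calico/*:v3.30.4` image) keyed by pod.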
Nov 24 07:02:33.305237 sshd[5081]: Connection closed by 139.178.68.195 port 46292
Nov 24 07:02:33.306010 sshd-session[5078]: pam_unix(sshd:session): session closed for user core
Nov 24 07:02:33.311052 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit.
Nov 24 07:02:33.312116 systemd[1]: sshd@19-24.144.92.64:22-139.178.68.195:46292.service: Deactivated successfully.
Nov 24 07:02:33.315678 systemd[1]: session-20.scope: Deactivated successfully.
Nov 24 07:02:33.318800 systemd-logind[1472]: Removed session 20.
Nov 24 07:02:34.528428 kubelet[2674]: E1124 07:02:34.528280 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:02:35.531080 kubelet[2674]: E1124 07:02:35.531011 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-865bc4d9cd-wsm2s" podUID="45848e32-6c02-41b9-837d-21663011857a"
Nov 24 07:02:36.533177 kubelet[2674]: E1124 07:02:36.533104 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15"
Nov 24 07:02:38.326958 systemd[1]: Started sshd@20-24.144.92.64:22-139.178.68.195:46300.service - OpenSSH per-connection server daemon (139.178.68.195:46300).
Nov 24 07:02:38.451172 sshd[5095]: Accepted publickey for core from 139.178.68.195 port 46300 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:02:38.453962 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:02:38.461813 systemd-logind[1472]: New session 21 of user core.
Nov 24 07:02:38.466241 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 24 07:02:38.531740 kubelet[2674]: E1124 07:02:38.530757 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" podUID="8bd2d503-fa71-45be-9f52-ae92f15b3067"
Nov 24 07:02:38.732953 sshd[5100]: Connection closed by 139.178.68.195 port 46300
Nov 24 07:02:38.734097 sshd-session[5095]: pam_unix(sshd:session): session closed for user core
Nov 24 07:02:38.742879 systemd[1]: sshd@20-24.144.92.64:22-139.178.68.195:46300.service: Deactivated successfully.
Nov 24 07:02:38.746312 systemd[1]: session-21.scope: Deactivated successfully.
Nov 24 07:02:38.748181 systemd-logind[1472]: Session 21 logged out. Waiting for processes to exit.
Nov 24 07:02:38.750371 systemd-logind[1472]: Removed session 21.
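Notice the widening gaps between "Back-off pulling image" entries for the same pod. This is the kubelet's image-pull backoff; a minimal sketch, assuming the commonly cited defaults of a 10 s initial delay doubling to a 300 s cap (the exact kubelet parameters are an assumption here, not read from this log):

```python
# Sketch of exponential image-pull backoff: each retry for the same image
# waits twice as long as the last, up to a cap. Assumed defaults: 10 s
# initial delay, 300 s maximum.
def backoff_delays(attempts, initial=10.0, cap=300.0):
    """Return the list of delays (seconds) before each of `attempts` retries."""
    delay, delays = initial, []
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, cap)  # double, but never exceed the cap
    return delays
```

This is why the goldmane, whisker, and csi errors recur every few seconds at first and then only sporadically later in the log.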
Nov 24 07:02:40.529738 kubelet[2674]: E1124 07:02:40.529544 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f9b89cb9c-ljkq4" podUID="e3e5450f-313c-477b-87ac-4da097ca2eb2"
Nov 24 07:02:41.528948 kubelet[2674]: E1124 07:02:41.527761 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:02:42.529140 kubelet[2674]: E1124 07:02:42.529088 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-2jnsq" podUID="e713eb78-8f4a-4fad-881a-0e37cd3c7e10"
Nov 24 07:02:43.219103 kubelet[2674]: E1124 07:02:43.219030 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:02:43.756456 systemd[1]: Started sshd@21-24.144.92.64:22-139.178.68.195:53306.service - OpenSSH per-connection server daemon (139.178.68.195:53306).
Nov 24 07:02:43.853288 sshd[5137]: Accepted publickey for core from 139.178.68.195 port 53306 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:02:43.856388 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:02:43.863571 systemd-logind[1472]: New session 22 of user core.
Nov 24 07:02:43.872565 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 24 07:02:44.103732 sshd[5143]: Connection closed by 139.178.68.195 port 53306
Nov 24 07:02:44.105240 sshd-session[5137]: pam_unix(sshd:session): session closed for user core
Nov 24 07:02:44.118517 systemd[1]: sshd@21-24.144.92.64:22-139.178.68.195:53306.service: Deactivated successfully.
Nov 24 07:02:44.123361 systemd[1]: session-22.scope: Deactivated successfully.
Nov 24 07:02:44.127142 systemd-logind[1472]: Session 22 logged out. Waiting for processes to exit.
Nov 24 07:02:44.129271 systemd-logind[1472]: Removed session 22.
Nov 24 07:02:44.530635 kubelet[2674]: E1124 07:02:44.530055 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pgxc" podUID="246e1c4c-d135-4d73-8092-61385bbba6cb"
Nov 24 07:02:46.532553 kubelet[2674]: E1124 07:02:46.532478 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-865bc4d9cd-wsm2s" podUID="45848e32-6c02-41b9-837d-21663011857a"
Nov 24 07:02:49.123191 systemd[1]: Started sshd@22-24.144.92.64:22-139.178.68.195:53320.service - OpenSSH per-connection server daemon (139.178.68.195:53320).
Nov 24 07:02:49.215970 sshd[5157]: Accepted publickey for core from 139.178.68.195 port 53320 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:02:49.219627 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:02:49.231095 systemd-logind[1472]: New session 23 of user core.
Nov 24 07:02:49.242005 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 24 07:02:49.445921 sshd[5160]: Connection closed by 139.178.68.195 port 53320
Nov 24 07:02:49.446466 sshd-session[5157]: pam_unix(sshd:session): session closed for user core
Nov 24 07:02:49.453984 systemd[1]: sshd@22-24.144.92.64:22-139.178.68.195:53320.service: Deactivated successfully.
Nov 24 07:02:49.459513 systemd[1]: session-23.scope: Deactivated successfully.
Nov 24 07:02:49.463105 systemd-logind[1472]: Session 23 logged out. Waiting for processes to exit.
Nov 24 07:02:49.467100 systemd-logind[1472]: Removed session 23.
Nov 24 07:02:49.531600 kubelet[2674]: E1124 07:02:49.531177 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b7cb7b4d-lbdp7" podUID="8bd2d503-fa71-45be-9f52-ae92f15b3067"
Nov 24 07:02:49.533654 kubelet[2674]: E1124 07:02:49.533334 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xq9xq" podUID="c34e73b2-4364-4998-a5a0-398cc36c9e15"
Nov 24 07:02:51.527950 kubelet[2674]: E1124 07:02:51.527726 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
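The recurring `dns.go:153` "Nameserver limits exceeded" warnings stem from glibc reading at most 3 `nameserver` entries from a pod's resolv.conf, so the kubelet truncates the combined list and logs what it applied. A minimal sketch of that truncation (the 3-entry limit is the glibc behavior; treating kubelet as a simple "keep the first three, no dedup" is an assumption consistent with the duplicated 67.207.67.3 in the applied line above):

```python
# glibc's resolver honors at most 3 nameserver lines in resolv.conf.
MAX_NAMESERVERS = 3

def applied_nameservers(servers):
    """Split a candidate nameserver list into (applied, omitted),
    keeping only the first MAX_NAMESERVERS entries without deduplication."""
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]
```

Dropping the duplicate 67.207.67.3 from the node's resolv.conf would silence the warning without losing any resolver.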