Nov 5 15:54:06.031978 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 15:54:06.032009 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:54:06.032023 kernel: BIOS-provided physical RAM map:
Nov 5 15:54:06.032031 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 5 15:54:06.032037 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 5 15:54:06.032044 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 5 15:54:06.032053 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 5 15:54:06.032063 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 5 15:54:06.032071 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 15:54:06.032080 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 5 15:54:06.032088 kernel: NX (Execute Disable) protection: active
Nov 5 15:54:06.032095 kernel: APIC: Static calls initialized
Nov 5 15:54:06.032103 kernel: SMBIOS 2.8 present.
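The BIOS-e820 map above is the firmware's inventory of physical address ranges; only the "usable" entries are RAM the kernel may freely use. A minimal sketch (not part of the log) that parses those lines and sums the usable bytes — the range ends are inclusive, so each size is `end - start + 1`:

```python
import re

# BIOS-e820 entries copied verbatim from the log above; end addresses are inclusive.
E820 = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
"""

ENTRY = re.compile(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

def usable_bytes(text: str) -> int:
    """Sum the sizes of all 'usable' e820 ranges (inclusive end addresses)."""
    total = 0
    for start, end, kind in ENTRY.findall(text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1
    return total

print(usable_bytes(E820) / 2**20)  # roughly 2047.5 MiB on this 2 GiB droplet
```

The small hole below 1 MiB (the reserved EBDA and legacy BIOS regions) is why the usable total is slightly under 2 GiB.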
Nov 5 15:54:06.032114 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 5 15:54:06.032128 kernel: DMI: Memory slots populated: 1/1
Nov 5 15:54:06.032143 kernel: Hypervisor detected: KVM
Nov 5 15:54:06.032160 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 5 15:54:06.032172 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 15:54:06.032185 kernel: kvm-clock: using sched offset of 4067925947 cycles
Nov 5 15:54:06.032199 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 15:54:06.032209 kernel: tsc: Detected 2494.138 MHz processor
Nov 5 15:54:06.032218 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 15:54:06.032227 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 15:54:06.032238 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 5 15:54:06.032247 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 5 15:54:06.032256 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 15:54:06.032265 kernel: ACPI: Early table checksum verification disabled
Nov 5 15:54:06.032274 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 5 15:54:06.032282 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:54:06.032291 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:54:06.032302 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:54:06.032311 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 5 15:54:06.032319 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:54:06.032328 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:54:06.032336 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:54:06.032344 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:54:06.032353 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 5 15:54:06.032364 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 5 15:54:06.032373 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 5 15:54:06.032381 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 5 15:54:06.032394 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 5 15:54:06.032403 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 5 15:54:06.032414 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 5 15:54:06.032423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 5 15:54:06.032433 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 5 15:54:06.032442 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Nov 5 15:54:06.032451 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Nov 5 15:54:06.032460 kernel: Zone ranges:
Nov 5 15:54:06.032471 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 15:54:06.032480 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 5 15:54:06.032489 kernel: Normal empty
Nov 5 15:54:06.032498 kernel: Device empty
Nov 5 15:54:06.032507 kernel: Movable zone start for each node
Nov 5 15:54:06.032516 kernel: Early memory node ranges
Nov 5 15:54:06.032524 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 5 15:54:06.032533 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 5 15:54:06.032544 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 5 15:54:06.032553 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 15:54:06.032562 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 5 15:54:06.032571 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 5 15:54:06.032580 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 15:54:06.032592 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 15:54:06.032601 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 15:54:06.032614 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 15:54:06.032624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 15:54:06.032632 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 15:54:06.032643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 15:54:06.032653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 15:54:06.032662 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 15:54:06.032671 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 15:54:06.032682 kernel: TSC deadline timer available
Nov 5 15:54:06.032691 kernel: CPU topo: Max. logical packages: 1
Nov 5 15:54:06.032701 kernel: CPU topo: Max. logical dies: 1
Nov 5 15:54:06.032709 kernel: CPU topo: Max. dies per package: 1
Nov 5 15:54:06.032718 kernel: CPU topo: Max. threads per core: 1
Nov 5 15:54:06.032727 kernel: CPU topo: Num. cores per package: 2
Nov 5 15:54:06.032736 kernel: CPU topo: Num. threads per package: 2
Nov 5 15:54:06.032745 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 5 15:54:06.032757 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 15:54:06.032766 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 5 15:54:06.032775 kernel: Booting paravirtualized kernel on KVM
Nov 5 15:54:06.032784 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 15:54:06.032793 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 5 15:54:06.032802 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 5 15:54:06.034836 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 5 15:54:06.034855 kernel: pcpu-alloc: [0] 0 1
Nov 5 15:54:06.034865 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 5 15:54:06.034876 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:54:06.034886 kernel: random: crng init done
Nov 5 15:54:06.034896 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 15:54:06.034905 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 5 15:54:06.034917 kernel: Fallback order for Node 0: 0
Nov 5 15:54:06.034926 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Nov 5 15:54:06.034936 kernel: Policy zone: DMA32
Nov 5 15:54:06.034945 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 15:54:06.034955 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 5 15:54:06.034981 kernel: Kernel/User page tables isolation: enabled
Nov 5 15:54:06.034994 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 15:54:06.035008 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 15:54:06.035020 kernel: Dynamic Preempt: voluntary
Nov 5 15:54:06.035029 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 15:54:06.035046 kernel: rcu: RCU event tracing is enabled.
Nov 5 15:54:06.035056 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 5 15:54:06.035065 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 15:54:06.035074 kernel: Rude variant of Tasks RCU enabled.
Nov 5 15:54:06.035083 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 15:54:06.035095 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 15:54:06.035104 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 5 15:54:06.035119 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:54:06.035139 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:54:06.035154 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:54:06.035164 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 5 15:54:06.035173 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
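The "Total pages: 524153" figure follows directly from the "Early memory node ranges" reported earlier in the log. A quick cross-check (assuming the standard x86 4 KiB page size; range ends are inclusive):

```python
PAGE = 4096  # standard x86 page size

# "Early memory node ranges" from the log; end addresses are inclusive.
ranges = [
    (0x0000000000001000, 0x000000000009efff),
    (0x0000000000100000, 0x000000007ffdafff),
]

# Each range covers (end + 1 - start) bytes, all page-aligned here.
total_pages = sum((end + 1 - start) // PAGE for start, end in ranges)
print(total_pages)  # 524153, matching "Total pages" in the log
```

158 pages come from the sub-1 MiB range and 523995 from the range above 1 MiB.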
Nov 5 15:54:06.035185 kernel: Console: colour VGA+ 80x25
Nov 5 15:54:06.035194 kernel: printk: legacy console [tty0] enabled
Nov 5 15:54:06.035203 kernel: printk: legacy console [ttyS0] enabled
Nov 5 15:54:06.035213 kernel: ACPI: Core revision 20240827
Nov 5 15:54:06.035222 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 15:54:06.035240 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 15:54:06.035253 kernel: x2apic enabled
Nov 5 15:54:06.035263 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 15:54:06.035272 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 15:54:06.035282 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Nov 5 15:54:06.035297 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Nov 5 15:54:06.035307 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 5 15:54:06.035317 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 5 15:54:06.035330 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 15:54:06.035340 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 15:54:06.035349 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 15:54:06.035359 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 5 15:54:06.035369 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 15:54:06.035379 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 15:54:06.035388 kernel: MDS: Mitigation: Clear CPU buffers
Nov 5 15:54:06.035401 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 5 15:54:06.035411 kernel: active return thunk: its_return_thunk
Nov 5 15:54:06.035420 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 5 15:54:06.035430 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 15:54:06.035440 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 15:54:06.035449 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 15:54:06.035459 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 15:54:06.035471 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 5 15:54:06.035481 kernel: Freeing SMP alternatives memory: 32K
Nov 5 15:54:06.035491 kernel: pid_max: default: 32768 minimum: 301
Nov 5 15:54:06.035500 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 15:54:06.035510 kernel: landlock: Up and running.
Nov 5 15:54:06.035520 kernel: SELinux: Initializing.
Nov 5 15:54:06.035529 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 15:54:06.035539 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 15:54:06.037592 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 5 15:54:06.037605 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 5 15:54:06.037615 kernel: signal: max sigframe size: 1776
Nov 5 15:54:06.037625 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 15:54:06.037636 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 15:54:06.037646 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 15:54:06.037656 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 5 15:54:06.037670 kernel: smp: Bringing up secondary CPUs ...
Nov 5 15:54:06.037685 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 15:54:06.037696 kernel: .... node #0, CPUs: #1
Nov 5 15:54:06.037705 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 15:54:06.037715 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Nov 5 15:54:06.037726 kernel: Memory: 1989436K/2096612K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 102612K reserved, 0K cma-reserved)
Nov 5 15:54:06.037736 kernel: devtmpfs: initialized
Nov 5 15:54:06.037748 kernel: x86/mm: Memory block size: 128MB
Nov 5 15:54:06.037759 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 15:54:06.037768 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 5 15:54:06.037778 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 15:54:06.037788 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 15:54:06.037797 kernel: audit: initializing netlink subsys (disabled)
Nov 5 15:54:06.037834 kernel: audit: type=2000 audit(1762358043.671:1): state=initialized audit_enabled=0 res=1
Nov 5 15:54:06.037847 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 15:54:06.037858 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 15:54:06.037867 kernel: cpuidle: using governor menu
Nov 5 15:54:06.037877 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 15:54:06.037887 kernel: dca service started, version 1.12.1
Nov 5 15:54:06.037897 kernel: PCI: Using configuration type 1 for base access
Nov 5 15:54:06.037906 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 15:54:06.037919 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 15:54:06.037929 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 15:54:06.037939 kernel: ACPI: Added _OSI(Module Device)
Nov 5 15:54:06.037949 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 15:54:06.037958 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 15:54:06.037968 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 15:54:06.037978 kernel: ACPI: Interpreter enabled
Nov 5 15:54:06.037990 kernel: ACPI: PM: (supports S0 S5)
Nov 5 15:54:06.038000 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 15:54:06.038010 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 15:54:06.038020 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 15:54:06.038029 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 5 15:54:06.038039 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 15:54:06.038284 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 15:54:06.038429 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 5 15:54:06.038562 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 5 15:54:06.038575 kernel: acpiphp: Slot [3] registered
Nov 5 15:54:06.038584 kernel: acpiphp: Slot [4] registered
Nov 5 15:54:06.038594 kernel: acpiphp: Slot [5] registered
Nov 5 15:54:06.038604 kernel: acpiphp: Slot [6] registered
Nov 5 15:54:06.038618 kernel: acpiphp: Slot [7] registered
Nov 5 15:54:06.038628 kernel: acpiphp: Slot [8] registered
Nov 5 15:54:06.038637 kernel: acpiphp: Slot [9] registered
Nov 5 15:54:06.038647 kernel: acpiphp: Slot [10] registered
Nov 5 15:54:06.038657 kernel: acpiphp: Slot [11] registered
Nov 5 15:54:06.038666 kernel: acpiphp: Slot [12] registered
Nov 5 15:54:06.038676 kernel: acpiphp: Slot [13] registered
Nov 5 15:54:06.038689 kernel: acpiphp: Slot [14] registered
Nov 5 15:54:06.038698 kernel: acpiphp: Slot [15] registered
Nov 5 15:54:06.038708 kernel: acpiphp: Slot [16] registered
Nov 5 15:54:06.038717 kernel: acpiphp: Slot [17] registered
Nov 5 15:54:06.038727 kernel: acpiphp: Slot [18] registered
Nov 5 15:54:06.038737 kernel: acpiphp: Slot [19] registered
Nov 5 15:54:06.038746 kernel: acpiphp: Slot [20] registered
Nov 5 15:54:06.038756 kernel: acpiphp: Slot [21] registered
Nov 5 15:54:06.038768 kernel: acpiphp: Slot [22] registered
Nov 5 15:54:06.038778 kernel: acpiphp: Slot [23] registered
Nov 5 15:54:06.038787 kernel: acpiphp: Slot [24] registered
Nov 5 15:54:06.038797 kernel: acpiphp: Slot [25] registered
Nov 5 15:54:06.038917 kernel: acpiphp: Slot [26] registered
Nov 5 15:54:06.038932 kernel: acpiphp: Slot [27] registered
Nov 5 15:54:06.038945 kernel: acpiphp: Slot [28] registered
Nov 5 15:54:06.038978 kernel: acpiphp: Slot [29] registered
Nov 5 15:54:06.038992 kernel: acpiphp: Slot [30] registered
Nov 5 15:54:06.039006 kernel: acpiphp: Slot [31] registered
Nov 5 15:54:06.039022 kernel: PCI host bridge to bus 0000:00
Nov 5 15:54:06.039201 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 15:54:06.039323 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 15:54:06.040885 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 15:54:06.041048 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 5 15:54:06.041168 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 5 15:54:06.041286 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 15:54:06.041447 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 5 15:54:06.041587 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 5 15:54:06.041732 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 5 15:54:06.042828 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Nov 5 15:54:06.043039 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 5 15:54:06.046038 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 5 15:54:06.046250 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 5 15:54:06.046433 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 5 15:54:06.046588 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 5 15:54:06.046719 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Nov 5 15:54:06.046868 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 5 15:54:06.047072 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 5 15:54:06.047211 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 5 15:54:06.047356 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 5 15:54:06.047486 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 5 15:54:06.047616 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 5 15:54:06.047746 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Nov 5 15:54:06.048642 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Nov 5 15:54:06.048800 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 15:54:06.051858 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 15:54:06.052081 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Nov 5 15:54:06.052270 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Nov 5 15:54:06.052462 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 5 15:54:06.052694 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 15:54:06.053183 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Nov 5 15:54:06.053328 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Nov 5 15:54:06.053462 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 5 15:54:06.053616 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 5 15:54:06.053750 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Nov 5 15:54:06.053913 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Nov 5 15:54:06.054193 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 5 15:54:06.054343 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 5 15:54:06.054476 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Nov 5 15:54:06.054620 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Nov 5 15:54:06.054762 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 5 15:54:06.054919 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 5 15:54:06.055088 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Nov 5 15:54:06.055223 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Nov 5 15:54:06.055352 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 5 15:54:06.055500 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 5 15:54:06.055664 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Nov 5 15:54:06.056143 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 5 15:54:06.056163 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 15:54:06.056536 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 15:54:06.056548 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 15:54:06.056558 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 15:54:06.056572 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 5 15:54:06.056588 kernel: iommu: Default domain type: Translated
Nov 5 15:54:06.056598 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 15:54:06.056608 kernel: PCI: Using ACPI for IRQ routing
Nov 5 15:54:06.056618 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 15:54:06.056628 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 5 15:54:06.056638 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 5 15:54:06.056873 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 5 15:54:06.057026 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 5 15:54:06.058501 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 15:54:06.058527 kernel: vgaarb: loaded
Nov 5 15:54:06.058540 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 15:54:06.058550 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 15:54:06.058560 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 15:54:06.058570 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 15:54:06.058587 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 15:54:06.058596 kernel: pnp: PnP ACPI init
Nov 5 15:54:06.058606 kernel: pnp: PnP ACPI: found 4 devices
Nov 5 15:54:06.058617 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 15:54:06.058627 kernel: NET: Registered PF_INET protocol family
Nov 5 15:54:06.058637 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 15:54:06.058647 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 5 15:54:06.058660 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 15:54:06.058669 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 5 15:54:06.058679 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 5 15:54:06.058689 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 5 15:54:06.058698 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 5 15:54:06.058708 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 5 15:54:06.058718 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 15:54:06.058731 kernel: NET: Registered PF_XDP protocol family
Nov 5 15:54:06.058888 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 15:54:06.059043 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 15:54:06.059204 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 15:54:06.059325 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 5 15:54:06.059517 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 5 15:54:06.059665 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 5 15:54:06.061885 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 5 15:54:06.061922 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 5 15:54:06.062102 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 26753 usecs
Nov 5 15:54:06.062117 kernel: PCI: CLS 0 bytes, default 64
Nov 5 15:54:06.062128 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 5 15:54:06.062138 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Nov 5 15:54:06.062154 kernel: Initialise system trusted keyrings
Nov 5 15:54:06.062165 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 5 15:54:06.062175 kernel: Key type asymmetric registered
Nov 5 15:54:06.062184 kernel: Asymmetric key parser 'x509' registered
Nov 5 15:54:06.062198 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 15:54:06.062212 kernel: io scheduler mq-deadline registered
Nov 5 15:54:06.062228 kernel: io scheduler kyber registered
Nov 5 15:54:06.062249 kernel: io scheduler bfq registered
Nov 5 15:54:06.062264 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 15:54:06.062276 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 5 15:54:06.062286 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 5 15:54:06.062296 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 5 15:54:06.062306 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 15:54:06.062316 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 15:54:06.062326 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 15:54:06.062339 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 15:54:06.062348 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 15:54:06.062358 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 15:54:06.062519 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 5 15:54:06.062648 kernel: rtc_cmos 00:03: registered as rtc0
Nov 5 15:54:06.062772 kernel: rtc_cmos 00:03: setting system clock to 2025-11-05T15:54:04 UTC (1762358044)
Nov 5 15:54:06.062939 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 5 15:54:06.062953 kernel: intel_pstate: CPU model not supported
Nov 5 15:54:06.062976 kernel: NET: Registered PF_INET6 protocol family
Nov 5 15:54:06.062990 kernel: Segment Routing with IPv6
Nov 5 15:54:06.063004 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 15:54:06.063014 kernel: NET: Registered PF_PACKET protocol family
Nov 5 15:54:06.063023 kernel: Key type dns_resolver registered
Nov 5 15:54:06.063039 kernel: IPI shorthand broadcast: enabled
Nov 5 15:54:06.063049 kernel: sched_clock: Marking stable (1262003356, 148148296)->(1542988739, -132837087)
Nov 5 15:54:06.063058 kernel: registered taskstats version 1
Nov 5 15:54:06.063068 kernel: Loading compiled-in X.509 certificates
Nov 5 15:54:06.063078 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382'
Nov 5 15:54:06.063088 kernel: Demotion targets for Node 0: null
Nov 5 15:54:06.063098 kernel: Key type .fscrypt registered
Nov 5 15:54:06.063110 kernel: Key type fscrypt-provisioning registered
Nov 5 15:54:06.063137 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 15:54:06.063149 kernel: ima: Allocated hash algorithm: sha1
Nov 5 15:54:06.063159 kernel: ima: No architecture policies found
Nov 5 15:54:06.063169 kernel: clk: Disabling unused clocks
Nov 5 15:54:06.063179 kernel: Freeing unused kernel image (initmem) memory: 15964K
Nov 5 15:54:06.063190 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 15:54:06.063203 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 15:54:06.063213 kernel: Run /init as init process
Nov 5 15:54:06.063223 kernel: with arguments:
Nov 5 15:54:06.063234 kernel: /init
Nov 5 15:54:06.063244 kernel: with environment:
Nov 5 15:54:06.063254 kernel: HOME=/
Nov 5 15:54:06.063263 kernel: TERM=linux
Nov 5 15:54:06.063274 kernel: SCSI subsystem initialized
Nov 5 15:54:06.063287 kernel: libata version 3.00 loaded.
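The earlier rtc_cmos entry sets the system clock from the CMOS RTC and prints the same instant both as an ISO date and as a Unix timestamp (1762358044); the two representations are equivalent, as a quick check shows:

```python
from datetime import datetime, timezone

# Unix timestamp taken from the rtc_cmos line in the log.
ts = 1762358044

# Convert to a UTC datetime; this should reproduce the ISO date the kernel printed.
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
# 2025-11-05T15:54:04+00:00
```

Note the RTC time (15:54:04) trails the printk timestamps (15:54:06) by about two seconds, since these messages are stamped when flushed to the console rather than when the clock was read.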
Nov 5 15:54:06.063438 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 5 15:54:06.063593 kernel: scsi host0: ata_piix
Nov 5 15:54:06.063733 kernel: scsi host1: ata_piix
Nov 5 15:54:06.063747 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Nov 5 15:54:06.063762 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Nov 5 15:54:06.063775 kernel: ACPI: bus type USB registered
Nov 5 15:54:06.063786 kernel: usbcore: registered new interface driver usbfs
Nov 5 15:54:06.063796 kernel: usbcore: registered new interface driver hub
Nov 5 15:54:06.065854 kernel: usbcore: registered new device driver usb
Nov 5 15:54:06.066159 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 5 15:54:06.066379 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 5 15:54:06.066596 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 5 15:54:06.066834 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 5 15:54:06.067102 kernel: hub 1-0:1.0: USB hub found
Nov 5 15:54:06.067302 kernel: hub 1-0:1.0: 2 ports detected
Nov 5 15:54:06.067463 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 5 15:54:06.067597 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 5 15:54:06.067611 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 15:54:06.067622 kernel: GPT:16515071 != 125829119
Nov 5 15:54:06.067634 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 15:54:06.067648 kernel: GPT:16515071 != 125829119
Nov 5 15:54:06.067658 kernel: GPT: Use GNU Parted to correct GPT errors.
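The GPT warnings above are typical of an image written for a smaller disk and then grown: the backup GPT header sits at the LBA where the image originally ended (16515071) instead of on the last sector of the now 125829120-sector disk. A rough sanity check of those numbers (the actual repair would be done with parted or sgdisk, as the log itself suggests):

```python
SECTOR = 512  # logical block size reported for vda

disk_sectors = 125829120             # "[vda] 125829120 512-byte logical blocks"
expected_alt_lba = disk_sectors - 1  # backup GPT header belongs on the last LBA
found_alt_lba = 16515071             # LBA where the kernel actually found it

print(expected_alt_lba)  # 125829119, the second number in "GPT:16515071 != 125829119"
# Implied size of the original image before the disk was grown:
print((found_alt_lba + 1) * SECTOR / 2**30)  # 7.875 GiB
```

The mismatch is harmless at this stage; the kernel still parses the primary header, and on Flatcar's first boot the partition table is normally rewritten to use the full disk.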
Nov 5 15:54:06.067669 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 5 15:54:06.070250 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 5 15:54:06.070448 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 5 15:54:06.070595 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Nov 5 15:54:06.070750 kernel: scsi host2: Virtio SCSI HBA Nov 5 15:54:06.070765 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 15:54:06.070777 kernel: device-mapper: uevent: version 1.0.3 Nov 5 15:54:06.070788 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 15:54:06.070798 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 5 15:54:06.070822 kernel: raid6: avx2x4 gen() 17222 MB/s Nov 5 15:54:06.070833 kernel: raid6: avx2x2 gen() 17122 MB/s Nov 5 15:54:06.070847 kernel: raid6: avx2x1 gen() 12893 MB/s Nov 5 15:54:06.070857 kernel: raid6: using algorithm avx2x4 gen() 17222 MB/s Nov 5 15:54:06.070868 kernel: raid6: .... 
xor() 6208 MB/s, rmw enabled Nov 5 15:54:06.070878 kernel: raid6: using avx2x2 recovery algorithm Nov 5 15:54:06.070889 kernel: xor: automatically using best checksumming function avx Nov 5 15:54:06.070899 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 15:54:06.070909 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (162) Nov 5 15:54:06.070924 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01 Nov 5 15:54:06.070934 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:54:06.070945 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 15:54:06.070955 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 15:54:06.070982 kernel: loop: module loaded Nov 5 15:54:06.070997 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 15:54:06.071012 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 15:54:06.071028 systemd[1]: Successfully made /usr/ read-only. Nov 5 15:54:06.071048 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:54:06.071065 systemd[1]: Detected virtualization kvm. Nov 5 15:54:06.071081 systemd[1]: Detected architecture x86-64. Nov 5 15:54:06.071099 systemd[1]: Running in initrd. Nov 5 15:54:06.071114 systemd[1]: No hostname configured, using default hostname. Nov 5 15:54:06.071128 systemd[1]: Hostname set to . Nov 5 15:54:06.071138 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:54:06.071149 systemd[1]: Queued start job for default target initrd.target. 
Nov 5 15:54:06.071159 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:54:06.071171 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:54:06.071181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:54:06.071195 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 15:54:06.071206 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:54:06.071217 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 15:54:06.071228 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 15:54:06.071239 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:54:06.071252 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:54:06.071266 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:54:06.071277 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:54:06.071287 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:54:06.071298 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:54:06.071309 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:54:06.071320 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:54:06.071331 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:54:06.071344 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 15:54:06.071355 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 5 15:54:06.071366 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 5 15:54:06.071376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:54:06.071387 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:54:06.071397 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:54:06.071411 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 15:54:06.071421 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 15:54:06.071432 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:54:06.071443 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 15:54:06.071455 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 15:54:06.071466 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 15:54:06.071476 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:54:06.071490 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:54:06.071501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:54:06.071512 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 15:54:06.071523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:54:06.071537 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 15:54:06.071585 systemd-journald[298]: Collecting audit messages is disabled. Nov 5 15:54:06.071610 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:54:06.071624 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Nov 5 15:54:06.071635 kernel: Bridge firewalling registered Nov 5 15:54:06.071645 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:54:06.071657 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:54:06.071669 systemd-journald[298]: Journal started Nov 5 15:54:06.071693 systemd-journald[298]: Runtime Journal (/run/log/journal/5b6fb51843c4477cb5f33664a5618159) is 4.9M, max 39.2M, 34.3M free. Nov 5 15:54:06.059394 systemd-modules-load[299]: Inserted module 'br_netfilter' Nov 5 15:54:06.076823 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:54:06.131310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:54:06.132394 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:54:06.133254 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:54:06.137640 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 15:54:06.140003 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:54:06.142288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:54:06.146320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:54:06.176986 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:54:06.179576 systemd-tmpfiles[322]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 15:54:06.190424 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:54:06.192922 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:54:06.199042 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 5 15:54:06.230427 dracut-cmdline[338]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:54:06.248007 systemd-resolved[320]: Positive Trust Anchors: Nov 5 15:54:06.248030 systemd-resolved[320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:54:06.248035 systemd-resolved[320]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:54:06.248085 systemd-resolved[320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:54:06.283580 systemd-resolved[320]: Defaulting to hostname 'linux'. Nov 5 15:54:06.285218 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:54:06.287724 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:54:06.372839 kernel: Loading iSCSI transport class v2.0-870. 
Nov 5 15:54:06.389844 kernel: iscsi: registered transport (tcp) Nov 5 15:54:06.417151 kernel: iscsi: registered transport (qla4xxx) Nov 5 15:54:06.417237 kernel: QLogic iSCSI HBA Driver Nov 5 15:54:06.451463 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:54:06.472405 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:54:06.475156 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:54:06.537159 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 15:54:06.540431 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 15:54:06.543162 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 15:54:06.585064 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:54:06.589560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:54:06.629978 systemd-udevd[574]: Using default interface naming scheme 'v257'. Nov 5 15:54:06.643685 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:54:06.648588 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 15:54:06.683412 dracut-pre-trigger[642]: rd.md=0: removing MD RAID activation Nov 5 15:54:06.688612 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:54:06.692732 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:54:06.724886 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:54:06.729019 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 5 15:54:06.756527 systemd-networkd[688]: lo: Link UP Nov 5 15:54:06.756538 systemd-networkd[688]: lo: Gained carrier Nov 5 15:54:06.757330 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:54:06.757861 systemd[1]: Reached target network.target - Network. Nov 5 15:54:06.815657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:54:06.820309 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 15:54:06.943439 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 5 15:54:06.955249 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 5 15:54:06.965237 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 5 15:54:06.967516 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 15:54:06.995874 disk-uuid[744]: Primary Header is updated. Nov 5 15:54:06.995874 disk-uuid[744]: Secondary Entries is updated. Nov 5 15:54:06.995874 disk-uuid[744]: Secondary Header is updated. Nov 5 15:54:07.008675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 15:54:07.013087 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 15:54:07.075853 kernel: AES CTR mode by8 optimization enabled Nov 5 15:54:07.079500 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:54:07.080564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:54:07.082477 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:54:07.089833 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 5 15:54:07.095287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 5 15:54:07.121199 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Nov 5 15:54:07.121209 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 5 15:54:07.122015 systemd-networkd[688]: eth0: Link UP Nov 5 15:54:07.125747 systemd-networkd[688]: eth0: Gained carrier Nov 5 15:54:07.125770 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Nov 5 15:54:07.140910 systemd-networkd[688]: eth0: DHCPv4 address 134.199.212.97/20, gateway 134.199.208.1 acquired from 169.254.169.253 Nov 5 15:54:07.154616 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:54:07.154626 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:54:07.155935 systemd-networkd[688]: eth1: Link UP Nov 5 15:54:07.156175 systemd-networkd[688]: eth1: Gained carrier Nov 5 15:54:07.156192 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:54:07.178992 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.32/20 acquired from 169.254.169.253 Nov 5 15:54:07.208633 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 15:54:07.241674 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:54:07.245178 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:54:07.245962 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:54:07.247182 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Nov 5 15:54:07.251084 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 15:54:07.285188 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:54:08.061920 disk-uuid[745]: Warning: The kernel is still using the old partition table. Nov 5 15:54:08.061920 disk-uuid[745]: The new table will be used at the next reboot or after you Nov 5 15:54:08.061920 disk-uuid[745]: run partprobe(8) or kpartx(8) Nov 5 15:54:08.061920 disk-uuid[745]: The operation has completed successfully. Nov 5 15:54:08.074759 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 15:54:08.074919 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 15:54:08.078118 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 5 15:54:08.117853 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (837) Nov 5 15:54:08.121403 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:54:08.121505 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:54:08.125049 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:54:08.125150 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:54:08.135857 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:54:08.136752 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 15:54:08.140040 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 5 15:54:08.390131 ignition[856]: Ignition 2.22.0 Nov 5 15:54:08.390148 ignition[856]: Stage: fetch-offline Nov 5 15:54:08.390222 ignition[856]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:54:08.390238 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:54:08.393157 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:54:08.390401 ignition[856]: parsed url from cmdline: "" Nov 5 15:54:08.395194 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 5 15:54:08.390406 ignition[856]: no config URL provided Nov 5 15:54:08.390414 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:54:08.390428 ignition[856]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:54:08.390436 ignition[856]: failed to fetch config: resource requires networking Nov 5 15:54:08.390700 ignition[856]: Ignition finished successfully Nov 5 15:54:08.441102 ignition[864]: Ignition 2.22.0 Nov 5 15:54:08.441118 ignition[864]: Stage: fetch Nov 5 15:54:08.441336 ignition[864]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:54:08.441350 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:54:08.441472 ignition[864]: parsed url from cmdline: "" Nov 5 15:54:08.441478 ignition[864]: no config URL provided Nov 5 15:54:08.441487 ignition[864]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:54:08.441496 ignition[864]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:54:08.441531 ignition[864]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 5 15:54:08.470312 ignition[864]: GET result: OK Nov 5 15:54:08.471235 ignition[864]: parsing config with SHA512: e14e15be280cc5f46ae44694c254f86feba4f5954b0207fe67f81a025cd4a83ef0d7adbe125fce70a2373b1a7d36d45099cba622f9899ecc553558c2904515c4 Nov 5 15:54:08.477653 unknown[864]: fetched base config from "system" Nov 5 15:54:08.477671 unknown[864]: fetched base config from 
"system" Nov 5 15:54:08.478285 ignition[864]: fetch: fetch complete Nov 5 15:54:08.477679 unknown[864]: fetched user config from "digitalocean" Nov 5 15:54:08.478292 ignition[864]: fetch: fetch passed Nov 5 15:54:08.478354 ignition[864]: Ignition finished successfully Nov 5 15:54:08.482253 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 5 15:54:08.484403 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 15:54:08.522683 ignition[871]: Ignition 2.22.0 Nov 5 15:54:08.523499 ignition[871]: Stage: kargs Nov 5 15:54:08.524216 ignition[871]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:54:08.524724 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:54:08.526064 ignition[871]: kargs: kargs passed Nov 5 15:54:08.526170 ignition[871]: Ignition finished successfully Nov 5 15:54:08.527564 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 15:54:08.529847 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 15:54:08.567463 ignition[877]: Ignition 2.22.0 Nov 5 15:54:08.567477 ignition[877]: Stage: disks Nov 5 15:54:08.567647 ignition[877]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:54:08.567658 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:54:08.570713 ignition[877]: disks: disks passed Nov 5 15:54:08.570783 ignition[877]: Ignition finished successfully Nov 5 15:54:08.572821 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 15:54:08.577998 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 15:54:08.578803 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 15:54:08.579800 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:54:08.580716 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:54:08.581556 systemd[1]: Reached target basic.target - Basic System. 
Nov 5 15:54:08.583881 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 15:54:08.627897 systemd-fsck[886]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 5 15:54:08.631165 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 15:54:08.633888 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 15:54:08.759867 kernel: EXT4-fs (vda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 15:54:08.760107 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 15:54:08.761162 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 15:54:08.763329 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:54:08.765618 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 15:54:08.769275 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Nov 5 15:54:08.778225 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 5 15:54:08.782482 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 15:54:08.784286 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:54:08.788177 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 15:54:08.793002 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 5 15:54:08.818846 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (894) Nov 5 15:54:08.822861 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:54:08.844499 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:54:08.865789 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:54:08.865904 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:54:08.870666 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:54:08.885044 coreos-metadata[896]: Nov 05 15:54:08.880 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:54:08.894826 coreos-metadata[896]: Nov 05 15:54:08.893 INFO Fetch successful Nov 5 15:54:08.897916 coreos-metadata[897]: Nov 05 15:54:08.897 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:54:08.899555 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Nov 5 15:54:08.900137 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Nov 5 15:54:08.906604 initrd-setup-root[926]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 15:54:08.908831 coreos-metadata[897]: Nov 05 15:54:08.908 INFO Fetch successful Nov 5 15:54:08.913181 initrd-setup-root[933]: cut: /sysroot/etc/group: No such file or directory Nov 5 15:54:08.915115 coreos-metadata[897]: Nov 05 15:54:08.915 INFO wrote hostname ci-4487.0.1-e-b20d930803 to /sysroot/etc/hostname Nov 5 15:54:08.917907 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Nov 5 15:54:08.921606 initrd-setup-root[941]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 15:54:08.927673 initrd-setup-root[948]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 15:54:08.958103 systemd-networkd[688]: eth0: Gained IPv6LL Nov 5 15:54:09.064643 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 15:54:09.067149 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 15:54:09.068544 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 15:54:09.094862 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:54:09.103770 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 15:54:09.115272 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 15:54:09.139053 ignition[1017]: INFO : Ignition 2.22.0 Nov 5 15:54:09.140069 ignition[1017]: INFO : Stage: mount Nov 5 15:54:09.141941 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:54:09.141941 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:54:09.144212 ignition[1017]: INFO : mount: mount passed Nov 5 15:54:09.145016 ignition[1017]: INFO : Ignition finished successfully Nov 5 15:54:09.147438 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:54:09.149584 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:54:09.176199 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 5 15:54:09.209842 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1028) Nov 5 15:54:09.213713 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:54:09.213829 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:54:09.214111 systemd-networkd[688]: eth1: Gained IPv6LL Nov 5 15:54:09.218270 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:54:09.218370 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:54:09.220775 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:54:09.274125 ignition[1045]: INFO : Ignition 2.22.0 Nov 5 15:54:09.274125 ignition[1045]: INFO : Stage: files Nov 5 15:54:09.275508 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:54:09.275508 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:54:09.276924 ignition[1045]: DEBUG : files: compiled without relabeling support, skipping Nov 5 15:54:09.278458 ignition[1045]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 15:54:09.278458 ignition[1045]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 15:54:09.283870 ignition[1045]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 15:54:09.284877 ignition[1045]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 15:54:09.285896 unknown[1045]: wrote ssh authorized keys file for user: core Nov 5 15:54:09.286793 ignition[1045]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 15:54:09.288479 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 5 15:54:09.289516 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 5 15:54:09.397415 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 15:54:09.487346 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 5 15:54:09.488598 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 5 15:54:09.488598 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 15:54:09.488598 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:54:09.488598 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:54:09.488598 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:54:09.488598 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:54:09.488598 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:54:09.488598 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:54:09.496617 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:54:09.496617 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:54:09.496617 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:54:09.496617 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:54:09.496617 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:54:09.496617 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 5 15:54:10.810344 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 15:54:11.241705 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:54:11.241705 ignition[1045]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 15:54:11.244306 ignition[1045]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:54:11.246112 ignition[1045]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:54:11.246112 ignition[1045]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 15:54:11.246112 ignition[1045]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 5 15:54:11.246112 ignition[1045]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 15:54:11.250567 ignition[1045]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:54:11.250567 ignition[1045]: INFO : files: 
createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:54:11.250567 ignition[1045]: INFO : files: files passed Nov 5 15:54:11.250567 ignition[1045]: INFO : Ignition finished successfully Nov 5 15:54:11.250478 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 15:54:11.254128 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 15:54:11.258049 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 15:54:11.276269 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 15:54:11.276502 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 15:54:11.290407 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:54:11.290407 initrd-setup-root-after-ignition[1077]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:54:11.292721 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:54:11.296341 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:54:11.298225 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 15:54:11.301414 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 15:54:11.400886 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 15:54:11.401098 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 15:54:11.402108 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 15:54:11.402831 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 15:54:11.404264 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Nov 5 15:54:11.406104 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 15:54:11.448975 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:54:11.452355 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 15:54:11.484547 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:54:11.485846 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:54:11.486496 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:54:11.487342 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 15:54:11.488654 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 15:54:11.488907 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:54:11.490177 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 15:54:11.491314 systemd[1]: Stopped target basic.target - Basic System. Nov 5 15:54:11.492347 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 15:54:11.493344 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:54:11.494266 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 15:54:11.495402 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:54:11.496394 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 15:54:11.497419 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:54:11.498457 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 15:54:11.499581 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 15:54:11.500952 systemd[1]: Stopped target swap.target - Swaps. Nov 5 15:54:11.501819 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Nov 5 15:54:11.501990 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:54:11.503322 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:54:11.504465 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:54:11.505555 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 15:54:11.505867 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:54:11.506646 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 15:54:11.506823 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 15:54:11.508471 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 15:54:11.508747 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:54:11.509715 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 15:54:11.509901 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 15:54:11.510485 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 5 15:54:11.510605 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 5 15:54:11.513935 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 15:54:11.517104 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 15:54:11.517582 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 15:54:11.517773 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:54:11.519082 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 15:54:11.519243 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:54:11.521458 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Nov 5 15:54:11.523641 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:54:11.532476 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 15:54:11.532588 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 15:54:11.556759 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 15:54:11.565417 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 15:54:11.565609 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 15:54:11.569194 ignition[1101]: INFO : Ignition 2.22.0 Nov 5 15:54:11.569194 ignition[1101]: INFO : Stage: umount Nov 5 15:54:11.570481 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:54:11.570481 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:54:11.576869 ignition[1101]: INFO : umount: umount passed Nov 5 15:54:11.576869 ignition[1101]: INFO : Ignition finished successfully Nov 5 15:54:11.573551 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 15:54:11.573759 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 15:54:11.577341 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 15:54:11.577576 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 15:54:11.579167 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 15:54:11.579287 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 15:54:11.580055 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 5 15:54:11.580150 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 5 15:54:11.581085 systemd[1]: Stopped target network.target - Network. Nov 5 15:54:11.582018 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 15:54:11.582123 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 5 15:54:11.583139 systemd[1]: Stopped target paths.target - Path Units. Nov 5 15:54:11.584060 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 15:54:11.584159 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:54:11.585109 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 15:54:11.586120 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 15:54:11.587418 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 15:54:11.587510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:54:11.588453 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 15:54:11.588523 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:54:11.589514 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 15:54:11.589629 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 15:54:11.590522 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 15:54:11.590605 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 15:54:11.591518 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 15:54:11.591602 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 15:54:11.592769 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 15:54:11.594384 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 15:54:11.606260 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 15:54:11.606417 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 15:54:11.611591 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 15:54:11.611751 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 15:54:11.616449 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Nov 5 15:54:11.617046 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 15:54:11.617105 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:54:11.619515 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 15:54:11.620935 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 15:54:11.621043 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:54:11.623320 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 15:54:11.623441 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:54:11.625275 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 15:54:11.625358 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 15:54:11.626026 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:54:11.648966 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 15:54:11.649137 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:54:11.651055 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 15:54:11.651132 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 15:54:11.653562 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 15:54:11.653627 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:54:11.659417 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 15:54:11.659548 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:54:11.661437 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 15:54:11.661564 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 15:54:11.662619 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Nov 5 15:54:11.662732 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:54:11.665571 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 15:54:11.667637 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 15:54:11.667731 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:54:11.670308 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 15:54:11.670425 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:54:11.672319 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 5 15:54:11.672428 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:54:11.673162 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 15:54:11.673249 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:54:11.674045 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:54:11.674129 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:54:11.696197 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 15:54:11.696466 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 15:54:11.701851 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 15:54:11.701988 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 15:54:11.704387 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 15:54:11.706317 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 15:54:11.726612 systemd[1]: Switching root. 
Nov 5 15:54:11.768356 systemd-journald[298]: Journal stopped Nov 5 15:54:13.109056 systemd-journald[298]: Received SIGTERM from PID 1 (systemd). Nov 5 15:54:13.109180 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 15:54:13.109211 kernel: SELinux: policy capability open_perms=1 Nov 5 15:54:13.109225 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 15:54:13.109255 kernel: SELinux: policy capability always_check_network=0 Nov 5 15:54:13.109268 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 15:54:13.109282 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 15:54:13.109299 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 15:54:13.109312 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 15:54:13.109329 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 15:54:13.109348 kernel: audit: type=1403 audit(1762358051.933:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 15:54:13.109365 systemd[1]: Successfully loaded SELinux policy in 77.056ms. Nov 5 15:54:13.109383 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.501ms. Nov 5 15:54:13.109412 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:54:13.109427 systemd[1]: Detected virtualization kvm. Nov 5 15:54:13.109441 systemd[1]: Detected architecture x86-64. Nov 5 15:54:13.109454 systemd[1]: Detected first boot. Nov 5 15:54:13.109474 systemd[1]: Hostname set to . Nov 5 15:54:13.109489 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:54:13.109502 zram_generator::config[1146]: No configuration found. 
Nov 5 15:54:13.109516 kernel: Guest personality initialized and is inactive Nov 5 15:54:13.109533 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 5 15:54:13.109545 kernel: Initialized host personality Nov 5 15:54:13.109559 kernel: NET: Registered PF_VSOCK protocol family Nov 5 15:54:13.109579 systemd[1]: Populated /etc with preset unit settings. Nov 5 15:54:13.109597 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 15:54:13.109611 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 15:54:13.109624 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 15:54:13.109642 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 15:54:13.109659 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 15:54:13.109673 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 15:54:13.109694 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 15:54:13.109709 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 15:54:13.109723 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 15:54:13.109738 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 15:54:13.109751 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 15:54:13.109765 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:54:13.109779 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:54:13.109799 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 15:54:13.109831 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Nov 5 15:54:13.109846 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 15:54:13.109860 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:54:13.109882 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 15:54:13.109895 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:54:13.109909 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:54:13.109923 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 15:54:13.109937 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 15:54:13.109950 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 15:54:13.109965 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 15:54:13.109986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:54:13.110000 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:54:13.110015 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:54:13.110028 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:54:13.110042 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 15:54:13.110059 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 15:54:13.110081 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 15:54:13.110096 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:54:13.110118 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:54:13.110132 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:54:13.110144 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Nov 5 15:54:13.110158 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 15:54:13.110173 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 15:54:13.110187 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 15:54:13.110200 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:54:13.110223 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 15:54:13.110237 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 15:54:13.110250 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 15:54:13.110264 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 15:54:13.110278 systemd[1]: Reached target machines.target - Containers. Nov 5 15:54:13.110291 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 15:54:13.110305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:54:13.110325 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:54:13.110339 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 15:54:13.110354 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:54:13.110367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:54:13.110380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:54:13.110394 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 15:54:13.110414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 5 15:54:13.110429 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 15:54:13.110443 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 15:54:13.110457 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 15:54:13.110473 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 15:54:13.110486 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 15:54:13.110499 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:54:13.110522 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:54:13.110540 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:54:13.110553 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:54:13.110568 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 15:54:13.110588 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 15:54:13.110602 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:54:13.110615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:54:13.110629 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 15:54:13.110643 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 15:54:13.110656 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 15:54:13.110671 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Nov 5 15:54:13.110690 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 15:54:13.110703 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 15:54:13.110717 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:54:13.110739 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 15:54:13.110759 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 15:54:13.110781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:54:13.110800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:54:13.120984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:54:13.121038 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:54:13.121062 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:54:13.121083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:54:13.121128 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:54:13.121151 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 15:54:13.121171 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:54:13.121193 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:54:13.121212 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:54:13.121232 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 15:54:13.121255 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 15:54:13.121293 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. 
Nov 5 15:54:13.121321 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 15:54:13.121344 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:54:13.121367 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 15:54:13.121386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:54:13.121491 systemd-journald[1212]: Collecting audit messages is disabled. Nov 5 15:54:13.121578 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 15:54:13.121608 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:54:13.121628 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 15:54:13.121648 kernel: fuse: init (API version 7.41) Nov 5 15:54:13.121672 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 15:54:13.121696 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:54:13.121723 systemd-journald[1212]: Journal started Nov 5 15:54:13.121779 systemd-journald[1212]: Runtime Journal (/run/log/journal/5b6fb51843c4477cb5f33664a5618159) is 4.9M, max 39.2M, 34.3M free. Nov 5 15:54:12.686134 systemd[1]: Queued start job for default target multi-user.target. Nov 5 15:54:12.714190 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 5 15:54:12.715034 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 15:54:13.134967 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:54:13.126106 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Nov 5 15:54:13.128038 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 15:54:13.128251 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 15:54:13.131268 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:54:13.135172 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 15:54:13.186055 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Nov 5 15:54:13.186087 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Nov 5 15:54:13.205288 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:54:13.210229 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 15:54:13.212798 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 15:54:13.227838 kernel: loop1: detected capacity change from 0 to 128048 Nov 5 15:54:13.229587 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 15:54:13.240745 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:54:13.246439 systemd-journald[1212]: Time spent on flushing to /var/log/journal/5b6fb51843c4477cb5f33664a5618159 is 57.112ms for 997 entries. Nov 5 15:54:13.246439 systemd-journald[1212]: System Journal (/var/log/journal/5b6fb51843c4477cb5f33664a5618159) is 8M, max 163.5M, 155.5M free. Nov 5 15:54:13.310593 systemd-journald[1212]: Received client request to flush runtime journal. Nov 5 15:54:13.310657 kernel: ACPI: bus type drm_connector registered Nov 5 15:54:13.310689 kernel: loop2: detected capacity change from 0 to 110984 Nov 5 15:54:13.256783 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:54:13.261801 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:54:13.278035 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Nov 5 15:54:13.283855 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 15:54:13.316404 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 15:54:13.329778 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 15:54:13.349849 kernel: loop3: detected capacity change from 0 to 224512 Nov 5 15:54:13.378032 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 15:54:13.383139 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:54:13.389987 kernel: loop4: detected capacity change from 0 to 8 Nov 5 15:54:13.389353 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:54:13.408999 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:54:13.413850 kernel: loop5: detected capacity change from 0 to 128048 Nov 5 15:54:13.424285 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 15:54:13.435849 kernel: loop6: detected capacity change from 0 to 110984 Nov 5 15:54:13.447005 kernel: loop7: detected capacity change from 0 to 224512 Nov 5 15:54:13.456039 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Nov 5 15:54:13.456075 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Nov 5 15:54:13.469846 kernel: loop1: detected capacity change from 0 to 8 Nov 5 15:54:13.470702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:54:13.476256 (sd-merge)[1292]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Nov 5 15:54:13.487468 (sd-merge)[1292]: Merged extensions into '/usr'. Nov 5 15:54:13.502085 systemd[1]: Reload requested from client PID 1242 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 15:54:13.502118 systemd[1]: Reloading... 
Nov 5 15:54:13.645897 zram_generator::config[1324]: No configuration found. Nov 5 15:54:13.707996 systemd-resolved[1288]: Positive Trust Anchors: Nov 5 15:54:13.708017 systemd-resolved[1288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:54:13.708022 systemd-resolved[1288]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:54:13.708060 systemd-resolved[1288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:54:13.731046 systemd-resolved[1288]: Using system hostname 'ci-4487.0.1-e-b20d930803'. Nov 5 15:54:13.949267 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 15:54:13.950165 systemd[1]: Reloading finished in 447 ms. Nov 5 15:54:13.975910 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 15:54:13.977195 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:54:13.978267 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 15:54:13.983357 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:54:13.986903 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 15:54:14.003168 systemd[1]: Starting ensure-sysext.service... Nov 5 15:54:14.011262 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Nov 5 15:54:14.030077 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 15:54:14.057215 systemd[1]: Reload requested from client PID 1371 ('systemctl') (unit ensure-sysext.service)...
Nov 5 15:54:14.057247 systemd[1]: Reloading...
Nov 5 15:54:14.067052 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 15:54:14.067139 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 15:54:14.067695 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 15:54:14.068261 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 15:54:14.069872 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 15:54:14.070287 systemd-tmpfiles[1372]: ACLs are not supported, ignoring.
Nov 5 15:54:14.070375 systemd-tmpfiles[1372]: ACLs are not supported, ignoring.
Nov 5 15:54:14.079538 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:54:14.079557 systemd-tmpfiles[1372]: Skipping /boot
Nov 5 15:54:14.098357 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:54:14.098375 systemd-tmpfiles[1372]: Skipping /boot
Nov 5 15:54:14.265839 zram_generator::config[1418]: No configuration found.
Nov 5 15:54:14.501670 systemd[1]: Reloading finished in 443 ms.
Nov 5 15:54:14.523623 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 15:54:14.537373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:54:14.549424 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:54:14.552006 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 15:54:14.556250 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 15:54:14.561577 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 15:54:14.567416 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:54:14.574250 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 15:54:14.577526 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:54:14.577755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:54:14.585394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:54:14.596428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:54:14.601989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:54:14.602867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:54:14.603035 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:54:14.603139 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:54:14.613823 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:54:14.614871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:54:14.615364 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:54:14.615459 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:54:14.615554 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:54:14.623210 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:54:14.623534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:54:14.641844 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:54:14.642512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:54:14.642656 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:54:14.642801 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:54:14.651792 systemd[1]: Finished ensure-sysext.service.
Nov 5 15:54:14.664779 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 15:54:14.706609 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 15:54:14.712227 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:54:14.712465 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:54:14.716133 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:54:14.717572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:54:14.723312 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:54:14.742861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:54:14.744267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:54:14.745539 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:54:14.746749 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:54:14.749291 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:54:14.801247 systemd-udevd[1452]: Using default interface naming scheme 'v257'.
Nov 5 15:54:14.827975 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 15:54:14.845894 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 15:54:14.847205 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 15:54:14.856945 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:54:14.858373 augenrules[1488]: No rules
Nov 5 15:54:14.865064 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 15:54:14.866488 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:54:14.866738 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:54:15.009148 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 15:54:15.009945 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 15:54:15.106697 systemd-networkd[1499]: lo: Link UP
Nov 5 15:54:15.106709 systemd-networkd[1499]: lo: Gained carrier
Nov 5 15:54:15.109522 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 15:54:15.110248 systemd[1]: Reached target network.target - Network.
Nov 5 15:54:15.116847 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 15:54:15.121852 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 15:54:15.132739 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Nov 5 15:54:15.143219 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 5 15:54:15.149297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:54:15.149544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:54:15.156379 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:54:15.163315 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:54:15.176401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:54:15.178370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:54:15.178621 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:54:15.178884 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 15:54:15.178965 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:54:15.267847 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 5 15:54:15.273569 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 5 15:54:15.280343 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 15:54:15.289533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:54:15.291223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:54:15.293391 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:54:15.303663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:54:15.305321 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:54:15.310583 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:54:15.312531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:54:15.330144 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 15:54:15.337995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:54:15.397078 systemd-networkd[1499]: eth1: Configuring with /run/systemd/network/10-c2:9e:69:66:72:ee.network.
Nov 5 15:54:15.403933 systemd-networkd[1499]: eth1: Link UP
Nov 5 15:54:15.404671 systemd-networkd[1499]: eth1: Gained carrier
Nov 5 15:54:15.413128 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 5 15:54:15.428657 systemd-networkd[1499]: eth0: Configuring with /run/systemd/network/10-7e:4c:86:ca:55:7d.network.
Nov 5 15:54:15.431109 systemd-networkd[1499]: eth0: Link UP
Nov 5 15:54:15.432237 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 5 15:54:15.435121 systemd-networkd[1499]: eth0: Gained carrier
Nov 5 15:54:15.442333 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 5 15:54:15.443945 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 5 15:54:15.451728 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 15:54:15.460162 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 15:54:15.455394 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 15:54:15.467271 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 15:54:15.475838 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 5 15:54:15.480831 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 5 15:54:15.486844 kernel: ACPI: button: Power Button [PWRF]
Nov 5 15:54:15.499989 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 5 15:54:15.504496 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 15:54:15.505974 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 15:54:15.507534 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 15:54:15.509169 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 15:54:15.510314 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 5 15:54:15.511641 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 15:54:15.513113 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 15:54:15.515693 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 15:54:15.516307 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 15:54:15.516346 systemd[1]: Reached target paths.target - Path Units.
Nov 5 15:54:15.516776 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 15:54:15.518321 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 15:54:15.522737 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 15:54:15.530859 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 15:54:15.533211 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 15:54:15.533826 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 15:54:15.545790 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 15:54:15.547160 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 15:54:15.548985 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 15:54:15.556790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 15:54:15.559179 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 15:54:15.559664 systemd[1]: Reached target basic.target - Basic System.
Nov 5 15:54:15.560234 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 15:54:15.560262 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 15:54:15.562266 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 15:54:15.568271 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 5 15:54:15.579357 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 15:54:15.582151 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 15:54:15.587104 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 15:54:15.593822 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 15:54:15.594384 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 15:54:15.600654 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 5 15:54:15.600025 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 5 15:54:15.612636 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 5 15:54:15.625654 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 5 15:54:15.632191 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 5 15:54:15.637920 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 5 15:54:15.650149 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 15:54:15.661245 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 5 15:54:15.663325 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 5 15:54:15.664068 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 5 15:54:15.673166 systemd[1]: Starting update-engine.service - Update Engine...
Nov 5 15:54:15.684251 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 5 15:54:15.703896 jq[1551]: false
Nov 5 15:54:15.698692 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 5 15:54:15.703722 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 5 15:54:15.705613 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 5 15:54:15.749228 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Refreshing passwd entry cache
Nov 5 15:54:15.748470 oslogin_cache_refresh[1553]: Refreshing passwd entry cache
Nov 5 15:54:15.762287 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 5 15:54:15.772771 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 5 15:54:15.777096 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 5 15:54:15.786536 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Failure getting users, quitting
Nov 5 15:54:15.786536 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 15:54:15.786536 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Refreshing group entry cache
Nov 5 15:54:15.782090 oslogin_cache_refresh[1553]: Failure getting users, quitting
Nov 5 15:54:15.782114 oslogin_cache_refresh[1553]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 15:54:15.782178 oslogin_cache_refresh[1553]: Refreshing group entry cache
Nov 5 15:54:15.790132 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Failure getting groups, quitting
Nov 5 15:54:15.790132 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 15:54:15.787213 oslogin_cache_refresh[1553]: Failure getting groups, quitting
Nov 5 15:54:15.787233 oslogin_cache_refresh[1553]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 15:54:15.794908 jq[1565]: true
Nov 5 15:54:15.799729 extend-filesystems[1552]: Found /dev/vda6
Nov 5 15:54:15.827030 kernel: Console: switching to colour dummy device 80x25
Nov 5 15:54:15.827067 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 5 15:54:15.827091 kernel: [drm] features: -context_init
Nov 5 15:54:15.827113 kernel: [drm] number of scanouts: 1
Nov 5 15:54:15.827135 kernel: [drm] number of cap sets: 0
Nov 5 15:54:15.821310 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 5 15:54:15.827306 coreos-metadata[1548]: Nov 05 15:54:15.799 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 5 15:54:15.827306 coreos-metadata[1548]: Nov 05 15:54:15.811 INFO Fetch successful
Nov 5 15:54:15.801162 dbus-daemon[1549]: [system] SELinux support is enabled
Nov 5 15:54:15.834357 extend-filesystems[1552]: Found /dev/vda9
Nov 5 15:54:15.834357 extend-filesystems[1552]: Checking size of /dev/vda9
Nov 5 15:54:15.834474 update_engine[1563]: I20251105 15:54:15.828881 1563 main.cc:92] Flatcar Update Engine starting
Nov 5 15:54:15.828484 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 5 15:54:15.829430 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 5 15:54:15.830017 systemd[1]: motdgen.service: Deactivated successfully.
Nov 5 15:54:15.832195 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 5 15:54:15.849369 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 5 15:54:15.849455 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 5 15:54:15.849474 kernel: Console: switching to colour frame buffer device 128x48
Nov 5 15:54:15.849489 update_engine[1563]: I20251105 15:54:15.848621 1563 update_check_scheduler.cc:74] Next update check in 8m19s
Nov 5 15:54:15.850618 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 5 15:54:15.854763 tar[1568]: linux-amd64/LICENSE
Nov 5 15:54:15.854763 tar[1568]: linux-amd64/helm
Nov 5 15:54:15.850777 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 5 15:54:15.854405 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 5 15:54:15.885955 extend-filesystems[1552]: Resized partition /dev/vda9
Nov 5 15:54:16.075273 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks
Nov 5 15:54:16.075382 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 5 15:54:16.060878 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 5 15:54:16.076199 extend-filesystems[1608]: resize2fs 1.47.3 (8-Jul-2025)
Nov 5 15:54:16.061040 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Nov 5 15:54:16.061073 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 5 15:54:16.062447 systemd-logind[1561]: New seat seat0.
Nov 5 15:54:16.084013 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 15:54:16.093165 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 5 15:54:16.098434 systemd[1]: Started update-engine.service - Update Engine.
Nov 5 15:54:16.141138 kernel: EXT4-fs (vda9): resized filesystem to 14138363
Nov 5 15:54:16.143864 jq[1595]: true
Nov 5 15:54:16.157348 extend-filesystems[1608]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 5 15:54:16.157348 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 7
Nov 5 15:54:16.157348 extend-filesystems[1608]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long.
Nov 5 15:54:16.159102 extend-filesystems[1552]: Resized filesystem in /dev/vda9
Nov 5 15:54:16.166442 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 5 15:54:16.169502 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 5 15:54:16.172600 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 5 15:54:16.285864 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 5 15:54:16.287955 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 5 15:54:16.331366 bash[1640]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 15:54:16.340985 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 5 15:54:16.356016 systemd[1]: Starting sshkeys.service...
Nov 5 15:54:16.415560 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 5 15:54:16.417162 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 5 15:54:16.423642 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 5 15:54:16.444986 systemd-networkd[1499]: eth1: Gained IPv6LL
Nov 5 15:54:16.446850 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 5 15:54:16.469567 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 5 15:54:16.478185 systemd-logind[1561]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 5 15:54:16.486266 systemd[1]: Reached target network-online.target - Network is Online.
Nov 5 15:54:16.495973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:54:16.507931 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 5 15:54:16.548988 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:54:16.593672 coreos-metadata[1645]: Nov 05 15:54:16.591 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 5 15:54:16.635849 coreos-metadata[1645]: Nov 05 15:54:16.630 INFO Fetch successful
Nov 5 15:54:16.667620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:54:16.668974 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:54:16.686200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:54:16.696621 unknown[1645]: wrote ssh authorized keys file for user: core
Nov 5 15:54:16.735607 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 15:54:16.739946 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 5 15:54:16.765021 systemd-networkd[1499]: eth0: Gained IPv6LL
Nov 5 15:54:16.765927 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 5 15:54:16.791232 containerd[1597]: time="2025-11-05T15:54:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 15:54:16.793826 update-ssh-keys[1667]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 15:54:16.798224 containerd[1597]: time="2025-11-05T15:54:16.797587634Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 5 15:54:16.798387 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:54:16.798611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:54:16.805030 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 5 15:54:16.812361 systemd[1]: Finished sshkeys.service.
Nov 5 15:54:16.825121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:54:16.857700 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 5 15:54:16.879857 containerd[1597]: time="2025-11-05T15:54:16.876777628Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.23µs"
Nov 5 15:54:16.879857 containerd[1597]: time="2025-11-05T15:54:16.879205440Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 15:54:16.879857 containerd[1597]: time="2025-11-05T15:54:16.879265011Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 15:54:16.879857 containerd[1597]: time="2025-11-05T15:54:16.879433649Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 15:54:16.879857 containerd[1597]: time="2025-11-05T15:54:16.879450935Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 15:54:16.879857 containerd[1597]: time="2025-11-05T15:54:16.879482087Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 15:54:16.879857 containerd[1597]: time="2025-11-05T15:54:16.879591852Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 15:54:16.879857 containerd[1597]: time="2025-11-05T15:54:16.879613323Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 15:54:16.883022 containerd[1597]: time="2025-11-05T15:54:16.882958972Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 15:54:16.883022 containerd[1597]: time="2025-11-05T15:54:16.883014878Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 15:54:16.884647 containerd[1597]: time="2025-11-05T15:54:16.883038595Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 15:54:16.884647 containerd[1597]: time="2025-11-05T15:54:16.883053754Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 15:54:16.884647 containerd[1597]: time="2025-11-05T15:54:16.883224242Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 15:54:16.884647 containerd[1597]: time="2025-11-05T15:54:16.883471971Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 15:54:16.884647 containerd[1597]: time="2025-11-05T15:54:16.883511426Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 15:54:16.884647 containerd[1597]: time="2025-11-05T15:54:16.883524058Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 15:54:16.888131 containerd[1597]: time="2025-11-05T15:54:16.887964264Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 15:54:16.909872 containerd[1597]: time="2025-11-05T15:54:16.908391021Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 15:54:16.910010 containerd[1597]: time="2025-11-05T15:54:16.909974997Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 15:54:16.915109 containerd[1597]: time="2025-11-05T15:54:16.915055155Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 15:54:16.915229 containerd[1597]: time="2025-11-05T15:54:16.915140711Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 15:54:16.915229 containerd[1597]: time="2025-11-05T15:54:16.915158915Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 15:54:16.915229 containerd[1597]: time="2025-11-05T15:54:16.915177579Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 15:54:16.915229 containerd[1597]: time="2025-11-05T15:54:16.915198972Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 15:54:16.915229 containerd[1597]: time="2025-11-05T15:54:16.915215502Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 15:54:16.915346 containerd[1597]: time="2025-11-05T15:54:16.915235072Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 15:54:16.915346 containerd[1597]: time="2025-11-05T15:54:16.915251482Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 15:54:16.915346 containerd[1597]: time="2025-11-05T15:54:16.915268888Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 15:54:16.915346 containerd[1597]: time="2025-11-05T15:54:16.915284132Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 15:54:16.915346 containerd[1597]: time="2025-11-05T15:54:16.915297013Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 15:54:16.915346 containerd[1597]: time="2025-11-05T15:54:16.915312424Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 15:54:16.915487 containerd[1597]: time="2025-11-05T15:54:16.915461118Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 15:54:16.915510 containerd[1597]: time="2025-11-05T15:54:16.915500444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 15:54:16.915532 containerd[1597]: time="2025-11-05T15:54:16.915519752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 15:54:16.915558 containerd[1597]: time="2025-11-05T15:54:16.915536493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 15:54:16.915558 containerd[1597]: time="2025-11-05T15:54:16.915550446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 15:54:16.915598 containerd[1597]: time="2025-11-05T15:54:16.915565667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 15:54:16.915598 containerd[1597]: time="2025-11-05T15:54:16.915584095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 15:54:16.915643 containerd[1597]: time="2025-11-05T15:54:16.915597438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 15:54:16.915643 containerd[1597]: time="2025-11-05T15:54:16.915635497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 15:54:16.915691 containerd[1597]: time="2025-11-05T15:54:16.915652672Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 15:54:16.915691 containerd[1597]: time="2025-11-05T15:54:16.915666259Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 15:54:16.915847 containerd[1597]: time="2025-11-05T15:54:16.915801788Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 15:54:16.915883 containerd[1597]: time="2025-11-05T15:54:16.915852444Z" level=info msg="Start snapshots syncer"
Nov 5 15:54:16.915914 containerd[1597]: time="2025-11-05T15:54:16.915880888Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 15:54:16.917666 containerd[1597]: time="2025-11-05T15:54:16.916263299Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 15:54:16.917666 containerd[1597]: time="2025-11-05T15:54:16.916339630Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 15:54:16.917906 containerd[1597]: time="2025-11-05T15:54:16.916554430Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 15:54:16.917906 containerd[1597]: time="2025-11-05T15:54:16.916670277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 15:54:16.917906 containerd[1597]: time="2025-11-05T15:54:16.916699792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 15:54:16.917906 containerd[1597]: time="2025-11-05T15:54:16.916715803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 15:54:16.917906 containerd[1597]: time="2025-11-05T15:54:16.916729517Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 15:54:16.917906 containerd[1597]: time="2025-11-05T15:54:16.916747454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 15:54:16.917906 containerd[1597]: time="2025-11-05T15:54:16.916762367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 15:54:16.917906 containerd[1597]: time="2025-11-05T15:54:16.916778000Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 15:54:16.923065 containerd[1597]: time="2025-11-05T15:54:16.922658695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 15:54:16.923065 containerd[1597]: time="2025-11-05T15:54:16.922733973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 15:54:16.923065 containerd[1597]: time="2025-11-05T15:54:16.922754886Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.923397768Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.923463312Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.923481571Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.923496281Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.923505094Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.923519703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.923536217Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.923981144Z" level=info msg="runtime interface created" Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.924001642Z" level=info msg="created NRI interface" Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.924016495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.924046718Z" level=info msg="Connect containerd service" Nov 5 15:54:16.924820 containerd[1597]: time="2025-11-05T15:54:16.924434744Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 15:54:16.928908 containerd[1597]: 
time="2025-11-05T15:54:16.928367010Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:54:16.982975 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 15:54:16.992176 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 15:54:17.069585 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:54:17.069903 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:54:17.077836 kernel: EDAC MC: Ver: 3.0.0 Nov 5 15:54:17.078314 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 15:54:17.150172 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 15:54:17.157779 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:54:17.160857 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:54:17.166380 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 15:54:17.169887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 5 15:54:17.250976 containerd[1597]: time="2025-11-05T15:54:17.250765217Z" level=info msg="Start subscribing containerd event"
Nov 5 15:54:17.250976 containerd[1597]: time="2025-11-05T15:54:17.250848709Z" level=info msg="Start recovering state"
Nov 5 15:54:17.251137 containerd[1597]: time="2025-11-05T15:54:17.251059982Z" level=info msg="Start event monitor"
Nov 5 15:54:17.251137 containerd[1597]: time="2025-11-05T15:54:17.251078454Z" level=info msg="Start cni network conf syncer for default"
Nov 5 15:54:17.251137 containerd[1597]: time="2025-11-05T15:54:17.251093133Z" level=info msg="Start streaming server"
Nov 5 15:54:17.251137 containerd[1597]: time="2025-11-05T15:54:17.251103963Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 5 15:54:17.251137 containerd[1597]: time="2025-11-05T15:54:17.251111901Z" level=info msg="runtime interface starting up..."
Nov 5 15:54:17.251137 containerd[1597]: time="2025-11-05T15:54:17.251118253Z" level=info msg="starting plugins..."
Nov 5 15:54:17.251137 containerd[1597]: time="2025-11-05T15:54:17.251132589Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 5 15:54:17.252305 containerd[1597]: time="2025-11-05T15:54:17.252251371Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 5 15:54:17.252385 containerd[1597]: time="2025-11-05T15:54:17.252364629Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 5 15:54:17.253890 containerd[1597]: time="2025-11-05T15:54:17.253280214Z" level=info msg="containerd successfully booted in 0.477988s"
Nov 5 15:54:17.253713 systemd[1]: Started containerd.service - containerd container runtime.
Nov 5 15:54:17.372625 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 5 15:54:17.376592 systemd[1]: Started sshd@0-134.199.212.97:22-139.178.68.195:42716.service - OpenSSH per-connection server daemon (139.178.68.195:42716).
Nov 5 15:54:17.488902 tar[1568]: linux-amd64/README.md
Nov 5 15:54:17.512829 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 5 15:54:17.513923 sshd[1717]: Accepted publickey for core from 139.178.68.195 port 42716 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:54:17.518399 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:54:17.529119 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 5 15:54:17.532757 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 5 15:54:17.545610 systemd-logind[1561]: New session 1 of user core.
Nov 5 15:54:17.569671 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 5 15:54:17.576358 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 5 15:54:17.595193 (systemd)[1725]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 5 15:54:17.601396 systemd-logind[1561]: New session c1 of user core.
Nov 5 15:54:17.798169 systemd[1725]: Queued start job for default target default.target.
Nov 5 15:54:17.803308 systemd[1725]: Created slice app.slice - User Application Slice.
Nov 5 15:54:17.803517 systemd[1725]: Reached target paths.target - Paths.
Nov 5 15:54:17.803587 systemd[1725]: Reached target timers.target - Timers.
Nov 5 15:54:17.807051 systemd[1725]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 5 15:54:17.831996 systemd[1725]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 5 15:54:17.833522 systemd[1725]: Reached target sockets.target - Sockets.
Nov 5 15:54:17.833747 systemd[1725]: Reached target basic.target - Basic System.
Nov 5 15:54:17.833914 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 5 15:54:17.834187 systemd[1725]: Reached target default.target - Main User Target.
Nov 5 15:54:17.834243 systemd[1725]: Startup finished in 220ms.
Nov 5 15:54:17.843065 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 5 15:54:17.924869 systemd[1]: Started sshd@1-134.199.212.97:22-139.178.68.195:42720.service - OpenSSH per-connection server daemon (139.178.68.195:42720).
Nov 5 15:54:17.988995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:54:17.990310 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 5 15:54:17.992286 systemd[1]: Startup finished in 2.344s (kernel) + 6.265s (initrd) + 6.133s (userspace) = 14.743s.
Nov 5 15:54:18.004585 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:54:18.016642 sshd[1736]: Accepted publickey for core from 139.178.68.195 port 42720 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:54:18.018071 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:54:18.023708 systemd-logind[1561]: New session 2 of user core.
Nov 5 15:54:18.030100 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 5 15:54:18.100475 sshd[1745]: Connection closed by 139.178.68.195 port 42720
Nov 5 15:54:18.103350 sshd-session[1736]: pam_unix(sshd:session): session closed for user core
Nov 5 15:54:18.112301 systemd[1]: sshd@1-134.199.212.97:22-139.178.68.195:42720.service: Deactivated successfully.
Nov 5 15:54:18.115607 systemd[1]: session-2.scope: Deactivated successfully.
Nov 5 15:54:18.118142 systemd-logind[1561]: Session 2 logged out. Waiting for processes to exit.
Nov 5 15:54:18.121196 systemd[1]: Started sshd@2-134.199.212.97:22-139.178.68.195:42722.service - OpenSSH per-connection server daemon (139.178.68.195:42722).
Nov 5 15:54:18.125158 systemd-logind[1561]: Removed session 2.
Nov 5 15:54:18.200117 sshd[1755]: Accepted publickey for core from 139.178.68.195 port 42722 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:54:18.201508 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:54:18.209910 systemd-logind[1561]: New session 3 of user core.
Nov 5 15:54:18.213011 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 5 15:54:18.272872 sshd[1758]: Connection closed by 139.178.68.195 port 42722
Nov 5 15:54:18.275086 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Nov 5 15:54:18.286074 systemd[1]: sshd@2-134.199.212.97:22-139.178.68.195:42722.service: Deactivated successfully.
Nov 5 15:54:18.288888 systemd[1]: session-3.scope: Deactivated successfully.
Nov 5 15:54:18.290301 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit.
Nov 5 15:54:18.294346 systemd[1]: Started sshd@3-134.199.212.97:22-139.178.68.195:42736.service - OpenSSH per-connection server daemon (139.178.68.195:42736).
Nov 5 15:54:18.297049 systemd-logind[1561]: Removed session 3.
Nov 5 15:54:18.365885 sshd[1764]: Accepted publickey for core from 139.178.68.195 port 42736 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:54:18.368607 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:54:18.381220 systemd-logind[1561]: New session 4 of user core.
Nov 5 15:54:18.383034 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 5 15:54:18.454711 sshd[1771]: Connection closed by 139.178.68.195 port 42736
Nov 5 15:54:18.455537 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Nov 5 15:54:18.470265 systemd[1]: sshd@3-134.199.212.97:22-139.178.68.195:42736.service: Deactivated successfully.
Nov 5 15:54:18.475622 systemd[1]: session-4.scope: Deactivated successfully.
Nov 5 15:54:18.477922 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit.
Nov 5 15:54:18.484276 systemd[1]: Started sshd@4-134.199.212.97:22-139.178.68.195:42742.service - OpenSSH per-connection server daemon (139.178.68.195:42742).
Nov 5 15:54:18.487765 systemd-logind[1561]: Removed session 4.
Nov 5 15:54:18.563916 sshd[1778]: Accepted publickey for core from 139.178.68.195 port 42742 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:54:18.566625 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:54:18.574315 systemd-logind[1561]: New session 5 of user core.
Nov 5 15:54:18.582064 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 5 15:54:18.647766 kubelet[1744]: E1105 15:54:18.647644 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:54:18.651623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:54:18.651828 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:54:18.655016 systemd[1]: kubelet.service: Consumed 1.136s CPU time, 263.2M memory peak.
Nov 5 15:54:18.660471 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 5 15:54:18.660796 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:54:18.673548 sudo[1782]: pam_unix(sudo:session): session closed for user root
Nov 5 15:54:18.677458 sshd[1781]: Connection closed by 139.178.68.195 port 42742
Nov 5 15:54:18.678162 sshd-session[1778]: pam_unix(sshd:session): session closed for user core
Nov 5 15:54:18.690323 systemd[1]: sshd@4-134.199.212.97:22-139.178.68.195:42742.service: Deactivated successfully.
Nov 5 15:54:18.692610 systemd[1]: session-5.scope: Deactivated successfully.
Nov 5 15:54:18.694625 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit.
Nov 5 15:54:18.698528 systemd[1]: Started sshd@5-134.199.212.97:22-139.178.68.195:42750.service - OpenSSH per-connection server daemon (139.178.68.195:42750).
Nov 5 15:54:18.699399 systemd-logind[1561]: Removed session 5.
Nov 5 15:54:18.770925 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 42750 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:54:18.772685 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:54:18.779990 systemd-logind[1561]: New session 6 of user core.
Nov 5 15:54:18.786063 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 5 15:54:18.850113 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 5 15:54:18.850969 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:54:18.857610 sudo[1794]: pam_unix(sudo:session): session closed for user root
Nov 5 15:54:18.867096 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 5 15:54:18.867464 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:54:18.879853 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:54:18.929534 augenrules[1816]: No rules
Nov 5 15:54:18.931678 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:54:18.932018 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:54:18.933620 sudo[1793]: pam_unix(sudo:session): session closed for user root
Nov 5 15:54:18.938410 sshd[1792]: Connection closed by 139.178.68.195 port 42750
Nov 5 15:54:18.939037 sshd-session[1789]: pam_unix(sshd:session): session closed for user core
Nov 5 15:54:18.951602 systemd[1]: sshd@5-134.199.212.97:22-139.178.68.195:42750.service: Deactivated successfully.
Nov 5 15:54:18.953944 systemd[1]: session-6.scope: Deactivated successfully.
Nov 5 15:54:18.955090 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit.
Nov 5 15:54:18.959290 systemd[1]: Started sshd@6-134.199.212.97:22-139.178.68.195:42754.service - OpenSSH per-connection server daemon (139.178.68.195:42754).
Nov 5 15:54:18.960981 systemd-logind[1561]: Removed session 6.
Nov 5 15:54:19.029094 sshd[1825]: Accepted publickey for core from 139.178.68.195 port 42754 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:54:19.031050 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:54:19.037643 systemd-logind[1561]: New session 7 of user core.
Nov 5 15:54:19.056286 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 5 15:54:19.118844 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 5 15:54:19.119229 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:54:19.748432 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 5 15:54:19.771611 (dockerd)[1846]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 5 15:54:20.215752 dockerd[1846]: time="2025-11-05T15:54:20.215536871Z" level=info msg="Starting up"
Nov 5 15:54:20.217238 dockerd[1846]: time="2025-11-05T15:54:20.217188214Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 5 15:54:20.240241 dockerd[1846]: time="2025-11-05T15:54:20.240129105Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 5 15:54:20.352245 dockerd[1846]: time="2025-11-05T15:54:20.351780257Z" level=info msg="Loading containers: start."
Nov 5 15:54:20.367842 kernel: Initializing XFRM netlink socket
Nov 5 15:54:20.619225 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 5 15:54:20.630689 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 5 15:54:20.678088 systemd-networkd[1499]: docker0: Link UP
Nov 5 15:54:20.679053 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 5 15:54:20.680659 dockerd[1846]: time="2025-11-05T15:54:20.680606849Z" level=info msg="Loading containers: done."
Nov 5 15:54:20.698842 dockerd[1846]: time="2025-11-05T15:54:20.698652787Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 5 15:54:20.698842 dockerd[1846]: time="2025-11-05T15:54:20.698788124Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 5 15:54:20.699123 dockerd[1846]: time="2025-11-05T15:54:20.698926129Z" level=info msg="Initializing buildkit"
Nov 5 15:54:20.700940 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3796697181-merged.mount: Deactivated successfully.
Nov 5 15:54:20.724520 dockerd[1846]: time="2025-11-05T15:54:20.724467031Z" level=info msg="Completed buildkit initialization"
Nov 5 15:54:20.731037 dockerd[1846]: time="2025-11-05T15:54:20.730923590Z" level=info msg="Daemon has completed initialization"
Nov 5 15:54:20.731329 dockerd[1846]: time="2025-11-05T15:54:20.731274654Z" level=info msg="API listen on /run/docker.sock"
Nov 5 15:54:20.732065 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 5 15:54:21.645679 containerd[1597]: time="2025-11-05T15:54:21.645219275Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 5 15:54:22.128274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4248465035.mount: Deactivated successfully.
Nov 5 15:54:23.614775 containerd[1597]: time="2025-11-05T15:54:23.614655221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:23.617103 containerd[1597]: time="2025-11-05T15:54:23.616698112Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Nov 5 15:54:23.617910 containerd[1597]: time="2025-11-05T15:54:23.617860247Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:23.621786 containerd[1597]: time="2025-11-05T15:54:23.621733420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:23.623593 containerd[1597]: time="2025-11-05T15:54:23.623532480Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.978241328s"
Nov 5 15:54:23.623797 containerd[1597]: time="2025-11-05T15:54:23.623775524Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 5 15:54:23.625857 containerd[1597]: time="2025-11-05T15:54:23.625709635Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 5 15:54:24.994758 containerd[1597]: time="2025-11-05T15:54:24.994681170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:24.995916 containerd[1597]: time="2025-11-05T15:54:24.995746246Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Nov 5 15:54:24.996524 containerd[1597]: time="2025-11-05T15:54:24.996484142Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:24.999726 containerd[1597]: time="2025-11-05T15:54:24.999663489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:25.001769 containerd[1597]: time="2025-11-05T15:54:25.001177060Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.375422466s"
Nov 5 15:54:25.001769 containerd[1597]: time="2025-11-05T15:54:25.001222715Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 5 15:54:25.002153 containerd[1597]: time="2025-11-05T15:54:25.002082685Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 5 15:54:26.290847 containerd[1597]: time="2025-11-05T15:54:26.290750820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:26.293274 containerd[1597]: time="2025-11-05T15:54:26.293213755Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Nov 5 15:54:26.294035 containerd[1597]: time="2025-11-05T15:54:26.293969602Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:26.298834 containerd[1597]: time="2025-11-05T15:54:26.298505764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:26.300256 containerd[1597]: time="2025-11-05T15:54:26.300203381Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.297969151s"
Nov 5 15:54:26.300493 containerd[1597]: time="2025-11-05T15:54:26.300443337Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 5 15:54:26.301101 containerd[1597]: time="2025-11-05T15:54:26.301058439Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 5 15:54:27.474372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1589472330.mount: Deactivated successfully.
Nov 5 15:54:28.041211 containerd[1597]: time="2025-11-05T15:54:28.041146110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:28.042133 containerd[1597]: time="2025-11-05T15:54:28.042089219Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Nov 5 15:54:28.043674 containerd[1597]: time="2025-11-05T15:54:28.042606790Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:28.044424 containerd[1597]: time="2025-11-05T15:54:28.044388272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:28.045174 containerd[1597]: time="2025-11-05T15:54:28.045139068Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.743935778s"
Nov 5 15:54:28.045290 containerd[1597]: time="2025-11-05T15:54:28.045273847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 5 15:54:28.046022 containerd[1597]: time="2025-11-05T15:54:28.046003448Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 5 15:54:28.047480 systemd-resolved[1288]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Nov 5 15:54:28.639286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3448394458.mount: Deactivated successfully.
Nov 5 15:54:28.903557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:54:28.909386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:54:29.125974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:54:29.136263 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:54:29.202321 kubelet[2200]: E1105 15:54:29.201752 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:54:29.207695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:54:29.207922 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:54:29.209296 systemd[1]: kubelet.service: Consumed 218ms CPU time, 110.3M memory peak.
Nov 5 15:54:29.557870 containerd[1597]: time="2025-11-05T15:54:29.557344057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:29.558798 containerd[1597]: time="2025-11-05T15:54:29.558406030Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Nov 5 15:54:29.559519 containerd[1597]: time="2025-11-05T15:54:29.559480661Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:29.562182 containerd[1597]: time="2025-11-05T15:54:29.562134375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:29.563743 containerd[1597]: time="2025-11-05T15:54:29.563691385Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.517585778s"
Nov 5 15:54:29.563743 containerd[1597]: time="2025-11-05T15:54:29.563739511Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 5 15:54:29.564955 containerd[1597]: time="2025-11-05T15:54:29.564917300Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 5 15:54:30.104612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195023940.mount: Deactivated successfully.
Nov 5 15:54:30.111787 containerd[1597]: time="2025-11-05T15:54:30.110852726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:54:30.111787 containerd[1597]: time="2025-11-05T15:54:30.111737994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 5 15:54:30.112081 containerd[1597]: time="2025-11-05T15:54:30.112051952Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:54:30.113741 containerd[1597]: time="2025-11-05T15:54:30.113710026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:54:30.114382 containerd[1597]: time="2025-11-05T15:54:30.114348861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 549.395428ms"
Nov 5 15:54:30.114382 containerd[1597]: time="2025-11-05T15:54:30.114382614Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 5 15:54:30.115271 containerd[1597]: time="2025-11-05T15:54:30.115247975Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 5 15:54:30.630104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1141724855.mount: Deactivated successfully.
Nov 5 15:54:31.101049 systemd-resolved[1288]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Nov 5 15:54:32.449978 containerd[1597]: time="2025-11-05T15:54:32.449913828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:32.451568 containerd[1597]: time="2025-11-05T15:54:32.451524140Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Nov 5 15:54:32.452651 containerd[1597]: time="2025-11-05T15:54:32.452612827Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:32.457305 containerd[1597]: time="2025-11-05T15:54:32.457235900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:32.459986 containerd[1597]: time="2025-11-05T15:54:32.459894993Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.344524901s"
Nov 5 15:54:32.459986 containerd[1597]: time="2025-11-05T15:54:32.459962893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 5 15:54:35.422645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:54:35.422838 systemd[1]: kubelet.service: Consumed 218ms CPU time, 110.3M memory peak.
Nov 5 15:54:35.425662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:54:35.461469 systemd[1]: Reload requested from client PID 2292 ('systemctl') (unit session-7.scope)...
Nov 5 15:54:35.461493 systemd[1]: Reloading...
Nov 5 15:54:35.608850 zram_generator::config[2336]: No configuration found.
Nov 5 15:54:35.949679 systemd[1]: Reloading finished in 487 ms.
Nov 5 15:54:36.040275 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 5 15:54:36.040367 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 5 15:54:36.040623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:54:36.040676 systemd[1]: kubelet.service: Consumed 126ms CPU time, 97.4M memory peak.
Nov 5 15:54:36.044766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:54:36.218492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:54:36.231366 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 15:54:36.282535 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:54:36.282535 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 15:54:36.282535 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:54:36.282535 kubelet[2391]: I1105 15:54:36.282443 2391 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 15:54:36.879884 kubelet[2391]: I1105 15:54:36.879499 2391 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 5 15:54:36.879884 kubelet[2391]: I1105 15:54:36.879557 2391 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 15:54:36.880120 kubelet[2391]: I1105 15:54:36.880086 2391 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 5 15:54:36.911798 kubelet[2391]: E1105 15:54:36.911750 2391 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://134.199.212.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 134.199.212.97:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:54:36.920263 kubelet[2391]: I1105 15:54:36.920209 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 15:54:36.934310 kubelet[2391]: I1105 15:54:36.934229 2391 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 15:54:36.939833 kubelet[2391]: I1105 15:54:36.939607 2391 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 15:54:36.942381 kubelet[2391]: I1105 15:54:36.942258 2391 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 15:54:36.942948 kubelet[2391]: I1105 15:54:36.942606 2391 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.1-e-b20d930803","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 15:54:36.945338 kubelet[2391]: I1105 15:54:36.945289 2391 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 15:54:36.945837 kubelet[2391]: I1105 15:54:36.945521 2391 container_manager_linux.go:304] "Creating device plugin manager"
Nov 5 15:54:36.947096 kubelet[2391]: I1105 15:54:36.947062 2391 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:54:36.951934 kubelet[2391]: I1105 15:54:36.951886 2391 kubelet.go:446] "Attempting to sync node with API server"
Nov 5 15:54:36.952567 kubelet[2391]: I1105 15:54:36.952160 2391 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 15:54:36.952567 kubelet[2391]: I1105 15:54:36.952206 2391 kubelet.go:352] "Adding apiserver pod source"
Nov 5 15:54:36.952567 kubelet[2391]: I1105 15:54:36.952223 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 15:54:36.963342 kubelet[2391]: W1105 15:54:36.963273 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.212.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-e-b20d930803&limit=500&resourceVersion=0": dial tcp 134.199.212.97:6443: connect: connection refused
Nov 5 15:54:36.963515 kubelet[2391]: E1105 15:54:36.963355 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://134.199.212.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-e-b20d930803&limit=500&resourceVersion=0\": dial tcp 134.199.212.97:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:54:36.965679 kubelet[2391]: I1105 15:54:36.964796 2391 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 15:54:36.966769 kubelet[2391]: W1105 15:54:36.966686 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.212.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 134.199.212.97:6443: connect: connection refused
Nov 5 15:54:36.967092 kubelet[2391]: E1105 15:54:36.967060 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.212.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.212.97:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:54:36.968321 kubelet[2391]: I1105 15:54:36.968295 2391 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 5 15:54:36.968980 kubelet[2391]: W1105 15:54:36.968942 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 5 15:54:36.969742 kubelet[2391]: I1105 15:54:36.969717 2391 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 15:54:36.969850 kubelet[2391]: I1105 15:54:36.969755 2391 server.go:1287] "Started kubelet"
Nov 5 15:54:36.971463 kubelet[2391]: I1105 15:54:36.971310 2391 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 15:54:36.980188 kubelet[2391]: I1105 15:54:36.980137 2391 server.go:479] "Adding debug handlers to kubelet server"
Nov 5 15:54:36.983411 kubelet[2391]: I1105 15:54:36.980210 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 15:54:36.984046 kubelet[2391]: I1105 15:54:36.984009 2391 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 15:54:36.985932 kubelet[2391]: E1105 15:54:36.982821 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://134.199.212.97:6443/api/v1/namespaces/default/events\": dial tcp 134.199.212.97:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.1-e-b20d930803.1875275568d253ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.1-e-b20d930803,UID:ci-4487.0.1-e-b20d930803,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.1-e-b20d930803,},FirstTimestamp:2025-11-05 15:54:36.969735086 +0000 UTC m=+0.733135964,LastTimestamp:2025-11-05 15:54:36.969735086 +0000 UTC m=+0.733135964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.1-e-b20d930803,}"
Nov 5 15:54:36.988298 kubelet[2391]: I1105 15:54:36.988266 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 15:54:36.989755 kubelet[2391]: I1105 15:54:36.989655 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 15:54:36.993144 kubelet[2391]: E1105 15:54:36.993114 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.1-e-b20d930803\" not found"
Nov 5 15:54:36.993345 kubelet[2391]: I1105 15:54:36.993330 2391 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 15:54:36.993723 kubelet[2391]: I1105 15:54:36.993697 2391 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 15:54:36.993946 kubelet[2391]: I1105 15:54:36.993930 2391 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 15:54:36.994572 kubelet[2391]: W1105 15:54:36.994519 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.212.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.212.97:6443: connect: connection refused
Nov 5 15:54:36.994741 kubelet[2391]: E1105 15:54:36.994710 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://134.199.212.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.212.97:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:54:36.995549 kubelet[2391]: E1105 15:54:36.995493 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.212.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-e-b20d930803?timeout=10s\": dial tcp 134.199.212.97:6443: connect: connection refused" interval="200ms"
Nov 5 15:54:36.996532 kubelet[2391]: I1105 15:54:36.996492 2391 factory.go:221] Registration of the systemd container factory successfully
Nov 5 15:54:36.996796 kubelet[2391]: I1105 15:54:36.996775 2391 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 15:54:37.013147 kubelet[2391]: I1105 15:54:37.013117 2391 factory.go:221] Registration of the containerd container factory successfully
Nov 5 15:54:37.030880 kubelet[2391]: I1105 15:54:37.030574 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 5 15:54:37.033711 kubelet[2391]: I1105 15:54:37.033640 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 5 15:54:37.033711 kubelet[2391]: I1105 15:54:37.033687 2391 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 5 15:54:37.033711 kubelet[2391]: I1105 15:54:37.033719 2391 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 15:54:37.033971 kubelet[2391]: I1105 15:54:37.033731 2391 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 5 15:54:37.033971 kubelet[2391]: E1105 15:54:37.033834 2391 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 15:54:37.043284 kubelet[2391]: E1105 15:54:37.043192 2391 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 15:54:37.043428 kubelet[2391]: W1105 15:54:37.043372 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.212.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.212.97:6443: connect: connection refused
Nov 5 15:54:37.043573 kubelet[2391]: E1105 15:54:37.043431 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://134.199.212.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.212.97:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:54:37.048710 kubelet[2391]: I1105 15:54:37.048679 2391 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 15:54:37.048710 kubelet[2391]: I1105 15:54:37.048705 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 15:54:37.048916 kubelet[2391]: I1105 15:54:37.048733 2391 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:54:37.050686 kubelet[2391]: I1105 15:54:37.050642 2391 policy_none.go:49] "None policy: Start"
Nov 5 15:54:37.050686 kubelet[2391]: I1105 15:54:37.050678 2391 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 15:54:37.050987 kubelet[2391]: I1105 15:54:37.050702 2391 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 15:54:37.057221 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 5 15:54:37.067993 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 5 15:54:37.071582 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 5 15:54:37.080096 kubelet[2391]: I1105 15:54:37.080061 2391 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 5 15:54:37.080333 kubelet[2391]: I1105 15:54:37.080319 2391 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 15:54:37.080376 kubelet[2391]: I1105 15:54:37.080338 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 15:54:37.081410 kubelet[2391]: I1105 15:54:37.081344 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 15:54:37.083910 kubelet[2391]: E1105 15:54:37.083886 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 15:54:37.084011 kubelet[2391]: E1105 15:54:37.083936 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.1-e-b20d930803\" not found"
Nov 5 15:54:37.146438 systemd[1]: Created slice kubepods-burstable-podc1c681daedaa39f7c2774b5cfb7eb868.slice - libcontainer container kubepods-burstable-podc1c681daedaa39f7c2774b5cfb7eb868.slice.
Nov 5 15:54:37.160651 kubelet[2391]: E1105 15:54:37.160600 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-e-b20d930803\" not found" node="ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.163766 systemd[1]: Created slice kubepods-burstable-pod9f4291a225dfe7f7ff6ef46274d22413.slice - libcontainer container kubepods-burstable-pod9f4291a225dfe7f7ff6ef46274d22413.slice.
Nov 5 15:54:37.172872 kubelet[2391]: E1105 15:54:37.172839 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-e-b20d930803\" not found" node="ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.177184 systemd[1]: Created slice kubepods-burstable-pod459f182551f230a41304e585519ce557.slice - libcontainer container kubepods-burstable-pod459f182551f230a41304e585519ce557.slice.
Nov 5 15:54:37.180428 kubelet[2391]: E1105 15:54:37.180380 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-e-b20d930803\" not found" node="ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.181634 kubelet[2391]: I1105 15:54:37.181611 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.182149 kubelet[2391]: E1105 15:54:37.182121 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.212.97:6443/api/v1/nodes\": dial tcp 134.199.212.97:6443: connect: connection refused" node="ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.195157 kubelet[2391]: I1105 15:54:37.195105 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f4291a225dfe7f7ff6ef46274d22413-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-e-b20d930803\" (UID: \"9f4291a225dfe7f7ff6ef46274d22413\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.195326 kubelet[2391]: I1105 15:54:37.195208 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f4291a225dfe7f7ff6ef46274d22413-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-e-b20d930803\" (UID: \"9f4291a225dfe7f7ff6ef46274d22413\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.195326 kubelet[2391]: I1105 15:54:37.195247 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1c681daedaa39f7c2774b5cfb7eb868-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-e-b20d930803\" (UID: \"c1c681daedaa39f7c2774b5cfb7eb868\") " pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.195326 kubelet[2391]: I1105 15:54:37.195264 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1c681daedaa39f7c2774b5cfb7eb868-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-e-b20d930803\" (UID: \"c1c681daedaa39f7c2774b5cfb7eb868\") " pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.195326 kubelet[2391]: I1105 15:54:37.195283 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1c681daedaa39f7c2774b5cfb7eb868-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-e-b20d930803\" (UID: \"c1c681daedaa39f7c2774b5cfb7eb868\") " pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.195326 kubelet[2391]: I1105 15:54:37.195319 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f4291a225dfe7f7ff6ef46274d22413-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-e-b20d930803\" (UID: \"9f4291a225dfe7f7ff6ef46274d22413\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.195472 kubelet[2391]: I1105 15:54:37.195335 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f4291a225dfe7f7ff6ef46274d22413-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-e-b20d930803\" (UID: \"9f4291a225dfe7f7ff6ef46274d22413\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.195472 kubelet[2391]: I1105 15:54:37.195351 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f4291a225dfe7f7ff6ef46274d22413-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-e-b20d930803\" (UID: \"9f4291a225dfe7f7ff6ef46274d22413\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.195472 kubelet[2391]: I1105 15:54:37.195391 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/459f182551f230a41304e585519ce557-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-e-b20d930803\" (UID: \"459f182551f230a41304e585519ce557\") " pod="kube-system/kube-scheduler-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.196845 kubelet[2391]: E1105 15:54:37.196417 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.212.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-e-b20d930803?timeout=10s\": dial tcp 134.199.212.97:6443: connect: connection refused" interval="400ms"
Nov 5 15:54:37.383889 kubelet[2391]: I1105 15:54:37.383832 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.384469 kubelet[2391]: E1105 15:54:37.384374 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.212.97:6443/api/v1/nodes\": dial tcp 134.199.212.97:6443: connect: connection refused" node="ci-4487.0.1-e-b20d930803"
Nov 5 15:54:37.461626 kubelet[2391]: E1105 15:54:37.461443 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:37.462712 containerd[1597]: time="2025-11-05T15:54:37.462655132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-e-b20d930803,Uid:c1c681daedaa39f7c2774b5cfb7eb868,Namespace:kube-system,Attempt:0,}"
Nov 5 15:54:37.474521 kubelet[2391]: E1105 15:54:37.474210 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:37.475280 containerd[1597]: time="2025-11-05T15:54:37.475004310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-e-b20d930803,Uid:9f4291a225dfe7f7ff6ef46274d22413,Namespace:kube-system,Attempt:0,}"
Nov 5 15:54:37.483061 kubelet[2391]: E1105 15:54:37.483022 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:37.490832 containerd[1597]: time="2025-11-05T15:54:37.490777350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-e-b20d930803,Uid:459f182551f230a41304e585519ce557,Namespace:kube-system,Attempt:0,}"
Nov 5 15:54:37.577276 containerd[1597]: time="2025-11-05T15:54:37.576965215Z" level=info msg="connecting to shim ed0fb0297613ed8303c648b2a333ffbf8a39937a0a0bdc9efab6e61f831e9b1f" address="unix:///run/containerd/s/682196d3790848d402be1700c4144964069bc1951f7d25407a969a4268ef9c76" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:54:37.581596 containerd[1597]: time="2025-11-05T15:54:37.581550227Z" level=info msg="connecting to shim 37dbca68a4b41741d5a99ca84a8852d4b7f85f229b82f45b7a866c6008ba35fa" address="unix:///run/containerd/s/db8c37235141f19789d467884f6ff437b0c544077e1bc2eb30c4f0917fcdb5e5" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:54:37.583515 containerd[1597]: time="2025-11-05T15:54:37.583470372Z" level=info msg="connecting to shim 52e01f04187bab7ed2fabc649e1ddbf6882d522af42f2d105bb77ca61dc86bfc" address="unix:///run/containerd/s/ed645a90ee15d6101b2e42575ed3360fb21ba8de041fea82c9010623591c5054" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:54:37.598803 kubelet[2391]: E1105 15:54:37.598550 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.212.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-e-b20d930803?timeout=10s\": dial tcp 134.199.212.97:6443: connect: connection refused" interval="800ms"
Nov 5 15:54:37.700850 systemd[1]: Started cri-containerd-37dbca68a4b41741d5a99ca84a8852d4b7f85f229b82f45b7a866c6008ba35fa.scope - libcontainer container 37dbca68a4b41741d5a99ca84a8852d4b7f85f229b82f45b7a866c6008ba35fa.
Nov 5 15:54:37.709635 systemd[1]: Started cri-containerd-52e01f04187bab7ed2fabc649e1ddbf6882d522af42f2d105bb77ca61dc86bfc.scope - libcontainer container 52e01f04187bab7ed2fabc649e1ddbf6882d522af42f2d105bb77ca61dc86bfc.
Nov 5 15:54:37.712197 systemd[1]: Started cri-containerd-ed0fb0297613ed8303c648b2a333ffbf8a39937a0a0bdc9efab6e61f831e9b1f.scope - libcontainer container ed0fb0297613ed8303c648b2a333ffbf8a39937a0a0bdc9efab6e61f831e9b1f.
Nov 5 15:54:37.786518 kubelet[2391]: I1105 15:54:37.786491 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-e-b20d930803" Nov 5 15:54:37.788629 kubelet[2391]: E1105 15:54:37.788586 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.212.97:6443/api/v1/nodes\": dial tcp 134.199.212.97:6443: connect: connection refused" node="ci-4487.0.1-e-b20d930803" Nov 5 15:54:37.806264 kubelet[2391]: W1105 15:54:37.806062 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.212.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-e-b20d930803&limit=500&resourceVersion=0": dial tcp 134.199.212.97:6443: connect: connection refused Nov 5 15:54:37.806264 kubelet[2391]: E1105 15:54:37.806126 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://134.199.212.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-e-b20d930803&limit=500&resourceVersion=0\": dial tcp 134.199.212.97:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:54:37.807128 containerd[1597]: time="2025-11-05T15:54:37.807093361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-e-b20d930803,Uid:459f182551f230a41304e585519ce557,Namespace:kube-system,Attempt:0,} returns sandbox id \"52e01f04187bab7ed2fabc649e1ddbf6882d522af42f2d105bb77ca61dc86bfc\"" Nov 5 15:54:37.809330 containerd[1597]: time="2025-11-05T15:54:37.809281198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-e-b20d930803,Uid:c1c681daedaa39f7c2774b5cfb7eb868,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed0fb0297613ed8303c648b2a333ffbf8a39937a0a0bdc9efab6e61f831e9b1f\"" Nov 5 15:54:37.809748 kubelet[2391]: E1105 15:54:37.809727 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:54:37.810774 kubelet[2391]: E1105 15:54:37.810556 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:54:37.814580 containerd[1597]: time="2025-11-05T15:54:37.814276769Z" level=info msg="CreateContainer within sandbox \"ed0fb0297613ed8303c648b2a333ffbf8a39937a0a0bdc9efab6e61f831e9b1f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:54:37.820175 containerd[1597]: time="2025-11-05T15:54:37.820137001Z" level=info msg="CreateContainer within sandbox \"52e01f04187bab7ed2fabc649e1ddbf6882d522af42f2d105bb77ca61dc86bfc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:54:37.833928 containerd[1597]: time="2025-11-05T15:54:37.833882863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-e-b20d930803,Uid:9f4291a225dfe7f7ff6ef46274d22413,Namespace:kube-system,Attempt:0,} returns sandbox id \"37dbca68a4b41741d5a99ca84a8852d4b7f85f229b82f45b7a866c6008ba35fa\"" Nov 5 15:54:37.835154 kubelet[2391]: E1105 15:54:37.834959 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:54:37.838113 containerd[1597]: time="2025-11-05T15:54:37.837003737Z" level=info msg="Container 9f6d37ddc9b355950aa276569857f5ecce4856d051276abadf737e1490c8c9f4: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:54:37.838113 containerd[1597]: time="2025-11-05T15:54:37.837029231Z" level=info msg="Container 55aec50ed5e2b7fe8c727c659b4ee86874bcfeac65406117826f2dff424cb0ea: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:54:37.842708 containerd[1597]: time="2025-11-05T15:54:37.840081000Z" level=info 
msg="CreateContainer within sandbox \"37dbca68a4b41741d5a99ca84a8852d4b7f85f229b82f45b7a866c6008ba35fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:54:37.851437 containerd[1597]: time="2025-11-05T15:54:37.851390583Z" level=info msg="Container 026b7215181445b54bebeb4ebd4880ea353c1bb81ed0f3d745c190c8d3c1092f: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:54:37.855505 containerd[1597]: time="2025-11-05T15:54:37.855212979Z" level=info msg="CreateContainer within sandbox \"52e01f04187bab7ed2fabc649e1ddbf6882d522af42f2d105bb77ca61dc86bfc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f6d37ddc9b355950aa276569857f5ecce4856d051276abadf737e1490c8c9f4\"" Nov 5 15:54:37.855505 containerd[1597]: time="2025-11-05T15:54:37.855403509Z" level=info msg="CreateContainer within sandbox \"ed0fb0297613ed8303c648b2a333ffbf8a39937a0a0bdc9efab6e61f831e9b1f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"55aec50ed5e2b7fe8c727c659b4ee86874bcfeac65406117826f2dff424cb0ea\"" Nov 5 15:54:37.857205 kubelet[2391]: W1105 15:54:37.857046 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.212.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.212.97:6443: connect: connection refused Nov 5 15:54:37.857365 kubelet[2391]: E1105 15:54:37.857223 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://134.199.212.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.212.97:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:54:37.857412 containerd[1597]: time="2025-11-05T15:54:37.857308451Z" level=info msg="StartContainer for \"9f6d37ddc9b355950aa276569857f5ecce4856d051276abadf737e1490c8c9f4\"" Nov 5 15:54:37.857561 containerd[1597]: 
time="2025-11-05T15:54:37.857478418Z" level=info msg="StartContainer for \"55aec50ed5e2b7fe8c727c659b4ee86874bcfeac65406117826f2dff424cb0ea\"" Nov 5 15:54:37.858696 containerd[1597]: time="2025-11-05T15:54:37.858633246Z" level=info msg="connecting to shim 55aec50ed5e2b7fe8c727c659b4ee86874bcfeac65406117826f2dff424cb0ea" address="unix:///run/containerd/s/682196d3790848d402be1700c4144964069bc1951f7d25407a969a4268ef9c76" protocol=ttrpc version=3 Nov 5 15:54:37.858796 containerd[1597]: time="2025-11-05T15:54:37.858648811Z" level=info msg="connecting to shim 9f6d37ddc9b355950aa276569857f5ecce4856d051276abadf737e1490c8c9f4" address="unix:///run/containerd/s/ed645a90ee15d6101b2e42575ed3360fb21ba8de041fea82c9010623591c5054" protocol=ttrpc version=3 Nov 5 15:54:37.861755 containerd[1597]: time="2025-11-05T15:54:37.861684856Z" level=info msg="CreateContainer within sandbox \"37dbca68a4b41741d5a99ca84a8852d4b7f85f229b82f45b7a866c6008ba35fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"026b7215181445b54bebeb4ebd4880ea353c1bb81ed0f3d745c190c8d3c1092f\"" Nov 5 15:54:37.862829 containerd[1597]: time="2025-11-05T15:54:37.862305741Z" level=info msg="StartContainer for \"026b7215181445b54bebeb4ebd4880ea353c1bb81ed0f3d745c190c8d3c1092f\"" Nov 5 15:54:37.865417 containerd[1597]: time="2025-11-05T15:54:37.865382864Z" level=info msg="connecting to shim 026b7215181445b54bebeb4ebd4880ea353c1bb81ed0f3d745c190c8d3c1092f" address="unix:///run/containerd/s/db8c37235141f19789d467884f6ff437b0c544077e1bc2eb30c4f0917fcdb5e5" protocol=ttrpc version=3 Nov 5 15:54:37.880122 kubelet[2391]: W1105 15:54:37.880064 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.212.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 134.199.212.97:6443: connect: connection refused Nov 5 15:54:37.880386 kubelet[2391]: E1105 15:54:37.880359 2391 reflector.go:166] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.212.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.212.97:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:54:37.883192 systemd[1]: Started cri-containerd-55aec50ed5e2b7fe8c727c659b4ee86874bcfeac65406117826f2dff424cb0ea.scope - libcontainer container 55aec50ed5e2b7fe8c727c659b4ee86874bcfeac65406117826f2dff424cb0ea. Nov 5 15:54:37.897124 systemd[1]: Started cri-containerd-9f6d37ddc9b355950aa276569857f5ecce4856d051276abadf737e1490c8c9f4.scope - libcontainer container 9f6d37ddc9b355950aa276569857f5ecce4856d051276abadf737e1490c8c9f4. Nov 5 15:54:37.908053 systemd[1]: Started cri-containerd-026b7215181445b54bebeb4ebd4880ea353c1bb81ed0f3d745c190c8d3c1092f.scope - libcontainer container 026b7215181445b54bebeb4ebd4880ea353c1bb81ed0f3d745c190c8d3c1092f. Nov 5 15:54:37.958181 kubelet[2391]: W1105 15:54:37.958023 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.212.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.212.97:6443: connect: connection refused Nov 5 15:54:37.958181 kubelet[2391]: E1105 15:54:37.958187 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://134.199.212.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.212.97:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:54:37.996571 containerd[1597]: time="2025-11-05T15:54:37.996461443Z" level=info msg="StartContainer for \"55aec50ed5e2b7fe8c727c659b4ee86874bcfeac65406117826f2dff424cb0ea\" returns successfully" Nov 5 15:54:38.015908 containerd[1597]: time="2025-11-05T15:54:38.015753025Z" level=info 
msg="StartContainer for \"9f6d37ddc9b355950aa276569857f5ecce4856d051276abadf737e1490c8c9f4\" returns successfully" Nov 5 15:54:38.027548 containerd[1597]: time="2025-11-05T15:54:38.027305674Z" level=info msg="StartContainer for \"026b7215181445b54bebeb4ebd4880ea353c1bb81ed0f3d745c190c8d3c1092f\" returns successfully" Nov 5 15:54:38.052748 kubelet[2391]: E1105 15:54:38.052706 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-e-b20d930803\" not found" node="ci-4487.0.1-e-b20d930803" Nov 5 15:54:38.053179 kubelet[2391]: E1105 15:54:38.053149 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:54:38.060629 kubelet[2391]: E1105 15:54:38.060593 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-e-b20d930803\" not found" node="ci-4487.0.1-e-b20d930803" Nov 5 15:54:38.061077 kubelet[2391]: E1105 15:54:38.060795 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:54:38.064186 kubelet[2391]: E1105 15:54:38.064155 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-e-b20d930803\" not found" node="ci-4487.0.1-e-b20d930803" Nov 5 15:54:38.064306 kubelet[2391]: E1105 15:54:38.064288 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:54:38.590904 kubelet[2391]: I1105 15:54:38.590858 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-e-b20d930803" Nov 5 15:54:39.068076 kubelet[2391]: E1105 
15:54:39.067957 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-e-b20d930803\" not found" node="ci-4487.0.1-e-b20d930803" Nov 5 15:54:39.068514 kubelet[2391]: E1105 15:54:39.068489 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-e-b20d930803\" not found" node="ci-4487.0.1-e-b20d930803" Nov 5 15:54:39.068682 kubelet[2391]: E1105 15:54:39.068664 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:54:39.068854 kubelet[2391]: E1105 15:54:39.068835 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:54:39.496602 kubelet[2391]: E1105 15:54:39.496486 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-e-b20d930803\" not found" node="ci-4487.0.1-e-b20d930803" Nov 5 15:54:39.496919 kubelet[2391]: E1105 15:54:39.496896 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:54:40.127112 kubelet[2391]: I1105 15:54:40.126560 2391 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-e-b20d930803" Nov 5 15:54:40.162728 kubelet[2391]: E1105 15:54:40.162590 2391 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4487.0.1-e-b20d930803.1875275568d253ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.1-e-b20d930803,UID:ci-4487.0.1-e-b20d930803,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.1-e-b20d930803,},FirstTimestamp:2025-11-05 15:54:36.969735086 +0000 UTC m=+0.733135964,LastTimestamp:2025-11-05 15:54:36.969735086 +0000 UTC m=+0.733135964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.1-e-b20d930803,}" Nov 5 15:54:40.195269 kubelet[2391]: I1105 15:54:40.195198 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803" Nov 5 15:54:40.205200 kubelet[2391]: E1105 15:54:40.205140 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-e-b20d930803\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803" Nov 5 15:54:40.205393 kubelet[2391]: I1105 15:54:40.205218 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803" Nov 5 15:54:40.208060 kubelet[2391]: E1105 15:54:40.207768 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.1-e-b20d930803\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803" Nov 5 15:54:40.208060 kubelet[2391]: I1105 15:54:40.207822 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-e-b20d930803" Nov 5 15:54:40.210366 kubelet[2391]: E1105 15:54:40.210316 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-e-b20d930803\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4487.0.1-e-b20d930803" Nov 5 15:54:40.970094 kubelet[2391]: I1105 15:54:40.970030 2391 apiserver.go:52] "Watching apiserver" Nov 5 15:54:40.994865 kubelet[2391]: I1105 15:54:40.994755 2391 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:54:42.714396 systemd[1]: Reload requested from client PID 2662 ('systemctl') (unit session-7.scope)... Nov 5 15:54:42.714798 systemd[1]: Reloading... Nov 5 15:54:42.846011 zram_generator::config[2702]: No configuration found. Nov 5 15:54:43.021709 kubelet[2391]: I1105 15:54:43.021568 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803" Nov 5 15:54:43.032975 kubelet[2391]: W1105 15:54:43.032293 2391 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 5 15:54:43.033424 kubelet[2391]: E1105 15:54:43.033382 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:54:43.077084 kubelet[2391]: E1105 15:54:43.077050 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:54:43.329332 systemd[1]: Reloading finished in 613 ms. Nov 5 15:54:43.364936 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:54:43.380434 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 15:54:43.380766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:54:43.380861 systemd[1]: kubelet.service: Consumed 1.237s CPU time, 129.1M memory peak. Nov 5 15:54:43.384902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 5 15:54:43.590152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:54:43.604446 (kubelet)[2757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:54:43.678566 kubelet[2757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:54:43.678566 kubelet[2757]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:54:43.678566 kubelet[2757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:54:43.680118 kubelet[2757]: I1105 15:54:43.678718 2757 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:54:43.698855 kubelet[2757]: I1105 15:54:43.698132 2757 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 15:54:43.698855 kubelet[2757]: I1105 15:54:43.698185 2757 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:54:43.698855 kubelet[2757]: I1105 15:54:43.698629 2757 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 15:54:43.705728 kubelet[2757]: I1105 15:54:43.705319 2757 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 5 15:54:43.709771 kubelet[2757]: I1105 15:54:43.709715 2757 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:54:43.720949 kubelet[2757]: I1105 15:54:43.720915 2757 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:54:43.730298 kubelet[2757]: I1105 15:54:43.729831 2757 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 15:54:43.730298 kubelet[2757]: I1105 15:54:43.730179 2757 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:54:43.730597 kubelet[2757]: I1105 15:54:43.730232 2757 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.1-e-b20d930803","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none"
,"CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:54:43.730597 kubelet[2757]: I1105 15:54:43.730519 2757 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:54:43.730597 kubelet[2757]: I1105 15:54:43.730536 2757 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 15:54:43.730882 kubelet[2757]: I1105 15:54:43.730615 2757 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:54:43.731859 kubelet[2757]: I1105 15:54:43.730964 2757 kubelet.go:446] "Attempting to sync node with API server" Nov 5 15:54:43.731859 kubelet[2757]: I1105 15:54:43.731127 2757 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:54:43.731859 kubelet[2757]: I1105 15:54:43.731269 2757 kubelet.go:352] "Adding apiserver pod source" Nov 5 15:54:43.731859 kubelet[2757]: I1105 15:54:43.731289 2757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:54:43.738908 kubelet[2757]: I1105 15:54:43.738746 2757 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:54:43.741239 kubelet[2757]: I1105 15:54:43.741199 2757 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 15:54:43.741855 kubelet[2757]: I1105 15:54:43.741835 2757 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:54:43.741952 kubelet[2757]: I1105 15:54:43.741873 2757 server.go:1287] "Started kubelet" Nov 5 15:54:43.749533 kubelet[2757]: I1105 15:54:43.748456 2757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:54:43.778941 kubelet[2757]: I1105 15:54:43.778626 2757 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:54:43.780472 kubelet[2757]: I1105 15:54:43.780254 2757 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:54:43.780765 kubelet[2757]: I1105 15:54:43.780698 2757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:54:43.781460 kubelet[2757]: I1105 15:54:43.781425 2757 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:54:43.783527 kubelet[2757]: I1105 15:54:43.783053 2757 server.go:479] "Adding debug handlers to kubelet server" Nov 5 15:54:43.790083 kubelet[2757]: I1105 15:54:43.788985 2757 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:54:43.790083 kubelet[2757]: E1105 15:54:43.789400 2757 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.1-e-b20d930803\" not found" Nov 5 15:54:43.801859 kubelet[2757]: I1105 15:54:43.801782 2757 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:54:43.802062 kubelet[2757]: I1105 15:54:43.802044 2757 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:54:43.823747 kubelet[2757]: I1105 15:54:43.823650 2757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 15:54:43.829451 kubelet[2757]: I1105 15:54:43.829397 2757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 15:54:43.829451 kubelet[2757]: I1105 15:54:43.829458 2757 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 15:54:43.829875 kubelet[2757]: I1105 15:54:43.829508 2757 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 15:54:43.829875 kubelet[2757]: I1105 15:54:43.829520 2757 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 15:54:43.829875 kubelet[2757]: E1105 15:54:43.829603 2757 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:54:43.831790 kubelet[2757]: I1105 15:54:43.831762 2757 factory.go:221] Registration of the systemd container factory successfully Nov 5 15:54:43.832104 kubelet[2757]: I1105 15:54:43.832081 2757 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:54:43.843243 kubelet[2757]: I1105 15:54:43.842382 2757 factory.go:221] Registration of the containerd container factory successfully Nov 5 15:54:43.861601 kubelet[2757]: E1105 15:54:43.861502 2757 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:54:43.929826 kubelet[2757]: E1105 15:54:43.929770 2757 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 5 15:54:43.975244 kubelet[2757]: I1105 15:54:43.974422 2757 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:54:43.975244 kubelet[2757]: I1105 15:54:43.974443 2757 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:54:43.975244 kubelet[2757]: I1105 15:54:43.974471 2757 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:54:43.975244 kubelet[2757]: I1105 15:54:43.974701 2757 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:54:43.975244 kubelet[2757]: I1105 15:54:43.974713 2757 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:54:43.975244 kubelet[2757]: I1105 15:54:43.974733 2757 policy_none.go:49] "None policy: Start" Nov 5 15:54:43.975244 
kubelet[2757]: I1105 15:54:43.974743 2757 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 15:54:43.975244 kubelet[2757]: I1105 15:54:43.974753 2757 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 15:54:43.975244 kubelet[2757]: I1105 15:54:43.975046 2757 state_mem.go:75] "Updated machine memory state"
Nov 5 15:54:43.983490 kubelet[2757]: I1105 15:54:43.982264 2757 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 5 15:54:43.983490 kubelet[2757]: I1105 15:54:43.982534 2757 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 15:54:43.983490 kubelet[2757]: I1105 15:54:43.982554 2757 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 15:54:43.992070 kubelet[2757]: I1105 15:54:43.987876 2757 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 15:54:43.995862 kubelet[2757]: E1105 15:54:43.993650 2757 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 15:54:44.099375 kubelet[2757]: I1105 15:54:44.097994 2757 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.110783 kubelet[2757]: I1105 15:54:44.110739 2757 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.110783 kubelet[2757]: I1105 15:54:44.110887 2757 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.132051 kubelet[2757]: I1105 15:54:44.131559 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.132051 kubelet[2757]: I1105 15:54:44.132012 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.132827 kubelet[2757]: I1105 15:54:44.132762 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.142021 kubelet[2757]: W1105 15:54:44.141657 2757 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 5 15:54:44.143414 kubelet[2757]: W1105 15:54:44.143171 2757 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 5 15:54:44.146959 kubelet[2757]: W1105 15:54:44.146609 2757 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 5 15:54:44.146959 kubelet[2757]: E1105 15:54:44.146692 2757 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-e-b20d930803\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.204932 kubelet[2757]: I1105 15:54:44.204599 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1c681daedaa39f7c2774b5cfb7eb868-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-e-b20d930803\" (UID: \"c1c681daedaa39f7c2774b5cfb7eb868\") " pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.205289 kubelet[2757]: I1105 15:54:44.205171 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1c681daedaa39f7c2774b5cfb7eb868-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-e-b20d930803\" (UID: \"c1c681daedaa39f7c2774b5cfb7eb868\") " pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.205506 kubelet[2757]: I1105 15:54:44.205261 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f4291a225dfe7f7ff6ef46274d22413-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-e-b20d930803\" (UID: \"9f4291a225dfe7f7ff6ef46274d22413\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.205506 kubelet[2757]: I1105 15:54:44.205472 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f4291a225dfe7f7ff6ef46274d22413-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-e-b20d930803\" (UID: \"9f4291a225dfe7f7ff6ef46274d22413\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.205833 kubelet[2757]: I1105 15:54:44.205596 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1c681daedaa39f7c2774b5cfb7eb868-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-e-b20d930803\" (UID: \"c1c681daedaa39f7c2774b5cfb7eb868\") " pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.205833 kubelet[2757]: I1105 15:54:44.205619 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f4291a225dfe7f7ff6ef46274d22413-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-e-b20d930803\" (UID: \"9f4291a225dfe7f7ff6ef46274d22413\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.205833 kubelet[2757]: I1105 15:54:44.205633 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f4291a225dfe7f7ff6ef46274d22413-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-e-b20d930803\" (UID: \"9f4291a225dfe7f7ff6ef46274d22413\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.205833 kubelet[2757]: I1105 15:54:44.205658 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f4291a225dfe7f7ff6ef46274d22413-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-e-b20d930803\" (UID: \"9f4291a225dfe7f7ff6ef46274d22413\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.205833 kubelet[2757]: I1105 15:54:44.205674 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/459f182551f230a41304e585519ce557-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-e-b20d930803\" (UID: \"459f182551f230a41304e585519ce557\") " pod="kube-system/kube-scheduler-ci-4487.0.1-e-b20d930803"
Nov 5 15:54:44.443294 kubelet[2757]: E1105 15:54:44.442742 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:44.446157 kubelet[2757]: E1105 15:54:44.444043 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:44.447197 kubelet[2757]: E1105 15:54:44.447149 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:44.738036 kubelet[2757]: I1105 15:54:44.736135 2757 apiserver.go:52] "Watching apiserver"
Nov 5 15:54:44.802952 kubelet[2757]: I1105 15:54:44.802884 2757 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 15:54:44.915536 kubelet[2757]: E1105 15:54:44.915487 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:44.917533 kubelet[2757]: E1105 15:54:44.917496 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:44.919321 kubelet[2757]: E1105 15:54:44.919153 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:44.978458 kubelet[2757]: I1105 15:54:44.978377 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.1-e-b20d930803" podStartSLOduration=0.978357642 podStartE2EDuration="978.357642ms" podCreationTimestamp="2025-11-05 15:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:54:44.978023646 +0000 UTC m=+1.367072991" watchObservedRunningTime="2025-11-05 15:54:44.978357642 +0000 UTC m=+1.367407001"
Nov 5 15:54:45.006894 kubelet[2757]: I1105 15:54:45.006518 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.1-e-b20d930803" podStartSLOduration=2.006492209 podStartE2EDuration="2.006492209s" podCreationTimestamp="2025-11-05 15:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:54:44.99496134 +0000 UTC m=+1.384010686" watchObservedRunningTime="2025-11-05 15:54:45.006492209 +0000 UTC m=+1.395541557"
Nov 5 15:54:45.007329 kubelet[2757]: I1105 15:54:45.007191 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.1-e-b20d930803" podStartSLOduration=1.007155418 podStartE2EDuration="1.007155418s" podCreationTimestamp="2025-11-05 15:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:54:45.005605453 +0000 UTC m=+1.394654799" watchObservedRunningTime="2025-11-05 15:54:45.007155418 +0000 UTC m=+1.396204764"
Nov 5 15:54:45.916628 kubelet[2757]: E1105 15:54:45.916361 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:45.917553 kubelet[2757]: E1105 15:54:45.917255 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:47.442023 kubelet[2757]: I1105 15:54:47.441978 2757 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 5 15:54:47.443921 containerd[1597]: time="2025-11-05T15:54:47.443857106Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 5 15:54:47.445403 kubelet[2757]: I1105 15:54:47.444188 2757 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 5 15:54:47.834037 kubelet[2757]: I1105 15:54:47.833607 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c-kube-proxy\") pod \"kube-proxy-c628k\" (UID: \"608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c\") " pod="kube-system/kube-proxy-c628k"
Nov 5 15:54:47.834037 kubelet[2757]: I1105 15:54:47.833656 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c-lib-modules\") pod \"kube-proxy-c628k\" (UID: \"608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c\") " pod="kube-system/kube-proxy-c628k"
Nov 5 15:54:47.834037 kubelet[2757]: I1105 15:54:47.833689 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c-xtables-lock\") pod \"kube-proxy-c628k\" (UID: \"608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c\") " pod="kube-system/kube-proxy-c628k"
Nov 5 15:54:47.833738 systemd[1]: Created slice kubepods-besteffort-pod608e1ae8_e39a_4cb1_b9ac_1c4c64f5597c.slice - libcontainer container kubepods-besteffort-pod608e1ae8_e39a_4cb1_b9ac_1c4c64f5597c.slice.
Nov 5 15:54:47.837123 kubelet[2757]: I1105 15:54:47.834959 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9k8n\" (UniqueName: \"kubernetes.io/projected/608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c-kube-api-access-f9k8n\") pod \"kube-proxy-c628k\" (UID: \"608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c\") " pod="kube-system/kube-proxy-c628k"
Nov 5 15:54:47.942905 kubelet[2757]: E1105 15:54:47.942767 2757 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 5 15:54:47.943120 kubelet[2757]: E1105 15:54:47.943101 2757 projected.go:194] Error preparing data for projected volume kube-api-access-f9k8n for pod kube-system/kube-proxy-c628k: configmap "kube-root-ca.crt" not found
Nov 5 15:54:47.943298 kubelet[2757]: E1105 15:54:47.943285 2757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c-kube-api-access-f9k8n podName:608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c nodeName:}" failed. No retries permitted until 2025-11-05 15:54:48.443258377 +0000 UTC m=+4.832307714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f9k8n" (UniqueName: "kubernetes.io/projected/608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c-kube-api-access-f9k8n") pod "kube-proxy-c628k" (UID: "608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c") : configmap "kube-root-ca.crt" not found
Nov 5 15:54:48.568330 systemd[1]: Created slice kubepods-besteffort-podf78587b5_2132_409c_a1ba_fa58abea6490.slice - libcontainer container kubepods-besteffort-podf78587b5_2132_409c_a1ba_fa58abea6490.slice.
Nov 5 15:54:48.640302 kubelet[2757]: I1105 15:54:48.640218 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgg82\" (UniqueName: \"kubernetes.io/projected/f78587b5-2132-409c-a1ba-fa58abea6490-kube-api-access-kgg82\") pod \"tigera-operator-7dcd859c48-dzrgs\" (UID: \"f78587b5-2132-409c-a1ba-fa58abea6490\") " pod="tigera-operator/tigera-operator-7dcd859c48-dzrgs"
Nov 5 15:54:48.640302 kubelet[2757]: I1105 15:54:48.640307 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f78587b5-2132-409c-a1ba-fa58abea6490-var-lib-calico\") pod \"tigera-operator-7dcd859c48-dzrgs\" (UID: \"f78587b5-2132-409c-a1ba-fa58abea6490\") " pod="tigera-operator/tigera-operator-7dcd859c48-dzrgs"
Nov 5 15:54:48.748860 kubelet[2757]: E1105 15:54:48.748371 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:48.751702 containerd[1597]: time="2025-11-05T15:54:48.751653679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c628k,Uid:608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c,Namespace:kube-system,Attempt:0,}"
Nov 5 15:54:48.785439 containerd[1597]: time="2025-11-05T15:54:48.785157253Z" level=info msg="connecting to shim eedb09bad213f8fd231b8db04c127e235538e05559c2daa126cfd03242232095" address="unix:///run/containerd/s/9a02fb87f4566c01ed2c199e0d7cde45de84bfef6c7b97785e73fbb43bfd169c" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:54:48.824158 systemd[1]: Started cri-containerd-eedb09bad213f8fd231b8db04c127e235538e05559c2daa126cfd03242232095.scope - libcontainer container eedb09bad213f8fd231b8db04c127e235538e05559c2daa126cfd03242232095.
Nov 5 15:54:48.860948 containerd[1597]: time="2025-11-05T15:54:48.860804220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c628k,Uid:608e1ae8-e39a-4cb1-b9ac-1c4c64f5597c,Namespace:kube-system,Attempt:0,} returns sandbox id \"eedb09bad213f8fd231b8db04c127e235538e05559c2daa126cfd03242232095\""
Nov 5 15:54:48.862407 kubelet[2757]: E1105 15:54:48.862365 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:48.867315 containerd[1597]: time="2025-11-05T15:54:48.867260790Z" level=info msg="CreateContainer within sandbox \"eedb09bad213f8fd231b8db04c127e235538e05559c2daa126cfd03242232095\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 5 15:54:48.874783 containerd[1597]: time="2025-11-05T15:54:48.874614221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dzrgs,Uid:f78587b5-2132-409c-a1ba-fa58abea6490,Namespace:tigera-operator,Attempt:0,}"
Nov 5 15:54:48.885600 containerd[1597]: time="2025-11-05T15:54:48.884621799Z" level=info msg="Container a956c79a7ee36185089b54bb7a70a06ce62b0d09211f6a26915a17c7d1cdef0f: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:54:48.899903 containerd[1597]: time="2025-11-05T15:54:48.899759745Z" level=info msg="CreateContainer within sandbox \"eedb09bad213f8fd231b8db04c127e235538e05559c2daa126cfd03242232095\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a956c79a7ee36185089b54bb7a70a06ce62b0d09211f6a26915a17c7d1cdef0f\""
Nov 5 15:54:48.900740 containerd[1597]: time="2025-11-05T15:54:48.900696199Z" level=info msg="StartContainer for \"a956c79a7ee36185089b54bb7a70a06ce62b0d09211f6a26915a17c7d1cdef0f\""
Nov 5 15:54:48.904840 containerd[1597]: time="2025-11-05T15:54:48.904529283Z" level=info msg="connecting to shim a956c79a7ee36185089b54bb7a70a06ce62b0d09211f6a26915a17c7d1cdef0f" address="unix:///run/containerd/s/9a02fb87f4566c01ed2c199e0d7cde45de84bfef6c7b97785e73fbb43bfd169c" protocol=ttrpc version=3
Nov 5 15:54:48.907165 containerd[1597]: time="2025-11-05T15:54:48.907069033Z" level=info msg="connecting to shim 87af5bfc3d3646bf1f2597eebaf55e6daac6ffada735b1d2a47a0d98a4e786c8" address="unix:///run/containerd/s/ef18188eacfb2a5099234a9644a9b5cbd7dc6fba0efa73f1753b62b2b50c2d94" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:54:48.936193 systemd[1]: Started cri-containerd-a956c79a7ee36185089b54bb7a70a06ce62b0d09211f6a26915a17c7d1cdef0f.scope - libcontainer container a956c79a7ee36185089b54bb7a70a06ce62b0d09211f6a26915a17c7d1cdef0f.
Nov 5 15:54:48.961353 systemd[1]: Started cri-containerd-87af5bfc3d3646bf1f2597eebaf55e6daac6ffada735b1d2a47a0d98a4e786c8.scope - libcontainer container 87af5bfc3d3646bf1f2597eebaf55e6daac6ffada735b1d2a47a0d98a4e786c8.
Nov 5 15:54:49.028150 containerd[1597]: time="2025-11-05T15:54:49.028034719Z" level=info msg="StartContainer for \"a956c79a7ee36185089b54bb7a70a06ce62b0d09211f6a26915a17c7d1cdef0f\" returns successfully"
Nov 5 15:54:49.072284 containerd[1597]: time="2025-11-05T15:54:49.071540833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dzrgs,Uid:f78587b5-2132-409c-a1ba-fa58abea6490,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"87af5bfc3d3646bf1f2597eebaf55e6daac6ffada735b1d2a47a0d98a4e786c8\""
Nov 5 15:54:49.077389 containerd[1597]: time="2025-11-05T15:54:49.077057012Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 5 15:54:49.080591 systemd-resolved[1288]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Nov 5 15:54:49.552968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3996970790.mount: Deactivated successfully.
Nov 5 15:54:49.944837 kubelet[2757]: E1105 15:54:49.944543 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:50.425601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1403793032.mount: Deactivated successfully.
Nov 5 15:54:50.947256 kubelet[2757]: E1105 15:54:50.947218 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:51.760021 systemd-timesyncd[1465]: Contacted time server 23.186.168.128:123 (2.flatcar.pool.ntp.org).
Nov 5 15:54:51.760564 systemd-timesyncd[1465]: Initial clock synchronization to Wed 2025-11-05 15:54:51.759516 UTC.
Nov 5 15:54:51.761324 systemd-resolved[1288]: Clock change detected. Flushing caches.
Nov 5 15:54:52.293721 containerd[1597]: time="2025-11-05T15:54:52.293514990Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:52.295121 containerd[1597]: time="2025-11-05T15:54:52.295066721Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 5 15:54:52.295602 containerd[1597]: time="2025-11-05T15:54:52.295555113Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:52.298826 containerd[1597]: time="2025-11-05T15:54:52.298755381Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:52.300401 containerd[1597]: time="2025-11-05T15:54:52.300334372Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.640649696s"
Nov 5 15:54:52.300401 containerd[1597]: time="2025-11-05T15:54:52.300377227Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 5 15:54:52.306304 containerd[1597]: time="2025-11-05T15:54:52.305465972Z" level=info msg="CreateContainer within sandbox \"87af5bfc3d3646bf1f2597eebaf55e6daac6ffada735b1d2a47a0d98a4e786c8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 5 15:54:52.317646 containerd[1597]: time="2025-11-05T15:54:52.317593786Z" level=info msg="Container 49338c3beb098d6cc5672d5dc10ab70837c1a3a3a317093a378a6cebe900ebe7: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:54:52.330173 containerd[1597]: time="2025-11-05T15:54:52.330092291Z" level=info msg="CreateContainer within sandbox \"87af5bfc3d3646bf1f2597eebaf55e6daac6ffada735b1d2a47a0d98a4e786c8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"49338c3beb098d6cc5672d5dc10ab70837c1a3a3a317093a378a6cebe900ebe7\""
Nov 5 15:54:52.331700 containerd[1597]: time="2025-11-05T15:54:52.331602683Z" level=info msg="StartContainer for \"49338c3beb098d6cc5672d5dc10ab70837c1a3a3a317093a378a6cebe900ebe7\""
Nov 5 15:54:52.334009 containerd[1597]: time="2025-11-05T15:54:52.333956143Z" level=info msg="connecting to shim 49338c3beb098d6cc5672d5dc10ab70837c1a3a3a317093a378a6cebe900ebe7" address="unix:///run/containerd/s/ef18188eacfb2a5099234a9644a9b5cbd7dc6fba0efa73f1753b62b2b50c2d94" protocol=ttrpc version=3
Nov 5 15:54:52.368902 systemd[1]: Started cri-containerd-49338c3beb098d6cc5672d5dc10ab70837c1a3a3a317093a378a6cebe900ebe7.scope - libcontainer container 49338c3beb098d6cc5672d5dc10ab70837c1a3a3a317093a378a6cebe900ebe7.
Nov 5 15:54:52.428322 containerd[1597]: time="2025-11-05T15:54:52.428264376Z" level=info msg="StartContainer for \"49338c3beb098d6cc5672d5dc10ab70837c1a3a3a317093a378a6cebe900ebe7\" returns successfully"
Nov 5 15:54:52.550416 kubelet[2757]: E1105 15:54:52.548293 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:52.550416 kubelet[2757]: I1105 15:54:52.550131 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c628k" podStartSLOduration=5.55010572 podStartE2EDuration="5.55010572s" podCreationTimestamp="2025-11-05 15:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:54:49.959920085 +0000 UTC m=+6.348969431" watchObservedRunningTime="2025-11-05 15:54:52.55010572 +0000 UTC m=+8.356575931"
Nov 5 15:54:52.550416 kubelet[2757]: I1105 15:54:52.550315 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-dzrgs" podStartSLOduration=1.906972777 podStartE2EDuration="4.550303812s" podCreationTimestamp="2025-11-05 15:54:48 +0000 UTC" firstStartedPulling="2025-11-05 15:54:49.075412164 +0000 UTC m=+5.464461490" lastFinishedPulling="2025-11-05 15:54:52.301322321 +0000 UTC m=+8.107792525" observedRunningTime="2025-11-05 15:54:52.547542871 +0000 UTC m=+8.354013083" watchObservedRunningTime="2025-11-05 15:54:52.550303812 +0000 UTC m=+8.356774024"
Nov 5 15:54:53.538346 kubelet[2757]: E1105 15:54:53.538298 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:53.940229 kubelet[2757]: E1105 15:54:53.939254 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:54.540282 kubelet[2757]: E1105 15:54:54.540217 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:54.541818 kubelet[2757]: E1105 15:54:54.541778 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:54.723844 kubelet[2757]: E1105 15:54:54.723805 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:55.543484 kubelet[2757]: E1105 15:54:55.543340 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:55.544868 kubelet[2757]: E1105 15:54:55.544828 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:54:57.755432 sudo[1829]: pam_unix(sudo:session): session closed for user root
Nov 5 15:54:57.760279 sshd[1828]: Connection closed by 139.178.68.195 port 42754
Nov 5 15:54:57.762584 sshd-session[1825]: pam_unix(sshd:session): session closed for user core
Nov 5 15:54:57.769285 systemd[1]: sshd@6-134.199.212.97:22-139.178.68.195:42754.service: Deactivated successfully.
Nov 5 15:54:57.774787 systemd[1]: session-7.scope: Deactivated successfully.
Nov 5 15:54:57.775002 systemd[1]: session-7.scope: Consumed 5.608s CPU time, 158.6M memory peak.
Nov 5 15:54:57.777519 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit.
Nov 5 15:54:57.785572 systemd-logind[1561]: Removed session 7.
Nov 5 15:55:01.835707 update_engine[1563]: I20251105 15:55:01.835544 1563 update_attempter.cc:509] Updating boot flags...
Nov 5 15:55:05.400821 systemd[1]: Created slice kubepods-besteffort-pod48c916fa_86cc_4d86_be10_486f34e0531a.slice - libcontainer container kubepods-besteffort-pod48c916fa_86cc_4d86_be10_486f34e0531a.slice.
Nov 5 15:55:05.436419 kubelet[2757]: I1105 15:55:05.435646 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48c916fa-86cc-4d86-be10-486f34e0531a-tigera-ca-bundle\") pod \"calico-typha-7c47d4578c-b8sxk\" (UID: \"48c916fa-86cc-4d86-be10-486f34e0531a\") " pod="calico-system/calico-typha-7c47d4578c-b8sxk"
Nov 5 15:55:05.436419 kubelet[2757]: I1105 15:55:05.435772 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/48c916fa-86cc-4d86-be10-486f34e0531a-typha-certs\") pod \"calico-typha-7c47d4578c-b8sxk\" (UID: \"48c916fa-86cc-4d86-be10-486f34e0531a\") " pod="calico-system/calico-typha-7c47d4578c-b8sxk"
Nov 5 15:55:05.436419 kubelet[2757]: I1105 15:55:05.435816 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbmvj\" (UniqueName: \"kubernetes.io/projected/48c916fa-86cc-4d86-be10-486f34e0531a-kube-api-access-wbmvj\") pod \"calico-typha-7c47d4578c-b8sxk\" (UID: \"48c916fa-86cc-4d86-be10-486f34e0531a\") " pod="calico-system/calico-typha-7c47d4578c-b8sxk"
Nov 5 15:55:05.604016 systemd[1]: Created slice kubepods-besteffort-pod80636142_a3d2_4e79_81a3_b21107045a40.slice - libcontainer container kubepods-besteffort-pod80636142_a3d2_4e79_81a3_b21107045a40.slice.
Nov 5 15:55:05.638455 kubelet[2757]: I1105 15:55:05.638343 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/80636142-a3d2-4e79-81a3-b21107045a40-cni-bin-dir\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.638661 kubelet[2757]: I1105 15:55:05.638531 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80636142-a3d2-4e79-81a3-b21107045a40-lib-modules\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.638661 kubelet[2757]: I1105 15:55:05.638562 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/80636142-a3d2-4e79-81a3-b21107045a40-node-certs\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.638661 kubelet[2757]: I1105 15:55:05.638588 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/80636142-a3d2-4e79-81a3-b21107045a40-var-lib-calico\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.638661 kubelet[2757]: I1105 15:55:05.638629 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/80636142-a3d2-4e79-81a3-b21107045a40-var-run-calico\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.638661 kubelet[2757]: I1105 15:55:05.638656 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80636142-a3d2-4e79-81a3-b21107045a40-tigera-ca-bundle\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.638917 kubelet[2757]: I1105 15:55:05.638683 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/80636142-a3d2-4e79-81a3-b21107045a40-cni-log-dir\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.638917 kubelet[2757]: I1105 15:55:05.638719 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/80636142-a3d2-4e79-81a3-b21107045a40-cni-net-dir\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.638917 kubelet[2757]: I1105 15:55:05.638747 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/80636142-a3d2-4e79-81a3-b21107045a40-flexvol-driver-host\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.638917 kubelet[2757]: I1105 15:55:05.638773 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80636142-a3d2-4e79-81a3-b21107045a40-xtables-lock\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.638917 kubelet[2757]: I1105 15:55:05.638801 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq57z\" (UniqueName: \"kubernetes.io/projected/80636142-a3d2-4e79-81a3-b21107045a40-kube-api-access-sq57z\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.639162 kubelet[2757]: I1105 15:55:05.638828 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/80636142-a3d2-4e79-81a3-b21107045a40-policysync\") pod \"calico-node-x5277\" (UID: \"80636142-a3d2-4e79-81a3-b21107045a40\") " pod="calico-system/calico-node-x5277"
Nov 5 15:55:05.698770 kubelet[2757]: E1105 15:55:05.698523 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62"
Nov 5 15:55:05.707014 kubelet[2757]: E1105 15:55:05.706621 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:55:05.707340 containerd[1597]: time="2025-11-05T15:55:05.707288754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c47d4578c-b8sxk,Uid:48c916fa-86cc-4d86-be10-486f34e0531a,Namespace:calico-system,Attempt:0,}"
Nov 5 15:55:05.739409 kubelet[2757]: I1105 15:55:05.739207 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca782dd5-c75b-4c0f-9e74-4db41ed6ac62-socket-dir\") pod \"csi-node-driver-5fjkv\" (UID: \"ca782dd5-c75b-4c0f-9e74-4db41ed6ac62\") " pod="calico-system/csi-node-driver-5fjkv"
Nov 5 15:55:05.740851 kubelet[2757]: I1105 15:55:05.740703 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mqg4\" (UniqueName: \"kubernetes.io/projected/ca782dd5-c75b-4c0f-9e74-4db41ed6ac62-kube-api-access-7mqg4\") pod \"csi-node-driver-5fjkv\" (UID: \"ca782dd5-c75b-4c0f-9e74-4db41ed6ac62\") " pod="calico-system/csi-node-driver-5fjkv"
Nov 5 15:55:05.741557 kubelet[2757]: I1105 15:55:05.741507 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca782dd5-c75b-4c0f-9e74-4db41ed6ac62-kubelet-dir\") pod \"csi-node-driver-5fjkv\" (UID: \"ca782dd5-c75b-4c0f-9e74-4db41ed6ac62\") " pod="calico-system/csi-node-driver-5fjkv"
Nov 5 15:55:05.744144 kubelet[2757]: I1105 15:55:05.742236 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca782dd5-c75b-4c0f-9e74-4db41ed6ac62-registration-dir\") pod \"csi-node-driver-5fjkv\" (UID: \"ca782dd5-c75b-4c0f-9e74-4db41ed6ac62\") " pod="calico-system/csi-node-driver-5fjkv"
Nov 5 15:55:05.744144 kubelet[2757]: I1105 15:55:05.742502 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ca782dd5-c75b-4c0f-9e74-4db41ed6ac62-varrun\") pod \"csi-node-driver-5fjkv\" (UID: \"ca782dd5-c75b-4c0f-9e74-4db41ed6ac62\") " pod="calico-system/csi-node-driver-5fjkv"
Nov 5 15:55:05.762509 containerd[1597]: time="2025-11-05T15:55:05.762444294Z" level=info msg="connecting to shim ed36b141e359438a47257626ecab726bf71078acd529feb0a5704507620c69e3" address="unix:///run/containerd/s/c2a66d51840ea9e38d657d9aa8db40712f0029d505bf7f793b184b7cd61c1765" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:55:05.767691 kubelet[2757]: E1105 15:55:05.767533 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:55:05.767961 kubelet[2757]: W1105 15:55:05.767932 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:55:05.771402 kubelet[2757]: E1105 15:55:05.771067 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:55:05.791862 kubelet[2757]: E1105 15:55:05.791764 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:55:05.791862 kubelet[2757]: W1105 15:55:05.791800 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:55:05.791862 kubelet[2757]: E1105 15:55:05.791834 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:55:05.826932 systemd[1]: Started cri-containerd-ed36b141e359438a47257626ecab726bf71078acd529feb0a5704507620c69e3.scope - libcontainer container ed36b141e359438a47257626ecab726bf71078acd529feb0a5704507620c69e3.
Nov 5 15:55:05.844242 kubelet[2757]: E1105 15:55:05.844186 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.844575 kubelet[2757]: W1105 15:55:05.844555 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.844691 kubelet[2757]: E1105 15:55:05.844677 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.845121 kubelet[2757]: E1105 15:55:05.845100 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.845294 kubelet[2757]: W1105 15:55:05.845282 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.845429 kubelet[2757]: E1105 15:55:05.845416 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.845758 kubelet[2757]: E1105 15:55:05.845718 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.845804 kubelet[2757]: W1105 15:55:05.845759 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.846263 kubelet[2757]: E1105 15:55:05.846229 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:05.847096 kubelet[2757]: E1105 15:55:05.847068 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.847096 kubelet[2757]: W1105 15:55:05.847094 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.847307 kubelet[2757]: E1105 15:55:05.847121 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.847791 kubelet[2757]: E1105 15:55:05.847766 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.847841 kubelet[2757]: W1105 15:55:05.847792 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.848973 kubelet[2757]: E1105 15:55:05.848944 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.848973 kubelet[2757]: W1105 15:55:05.848969 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.849654 kubelet[2757]: E1105 15:55:05.849620 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:05.849731 kubelet[2757]: E1105 15:55:05.849711 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.850556 kubelet[2757]: E1105 15:55:05.850526 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.850556 kubelet[2757]: W1105 15:55:05.850548 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.850709 kubelet[2757]: E1105 15:55:05.850675 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.851122 kubelet[2757]: E1105 15:55:05.851102 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.851122 kubelet[2757]: W1105 15:55:05.851121 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.851528 kubelet[2757]: E1105 15:55:05.851507 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:05.853159 kubelet[2757]: E1105 15:55:05.853102 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.853159 kubelet[2757]: W1105 15:55:05.853121 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.853461 kubelet[2757]: E1105 15:55:05.853233 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.853676 kubelet[2757]: E1105 15:55:05.853622 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.853676 kubelet[2757]: W1105 15:55:05.853640 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.854496 kubelet[2757]: E1105 15:55:05.854458 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:05.854784 kubelet[2757]: E1105 15:55:05.854558 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.854784 kubelet[2757]: W1105 15:55:05.854570 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.854784 kubelet[2757]: E1105 15:55:05.854620 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.855111 kubelet[2757]: E1105 15:55:05.854794 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.855111 kubelet[2757]: W1105 15:55:05.854804 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.856518 kubelet[2757]: E1105 15:55:05.856438 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:05.856826 kubelet[2757]: E1105 15:55:05.856801 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.856900 kubelet[2757]: W1105 15:55:05.856825 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.858044 kubelet[2757]: E1105 15:55:05.857824 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.858044 kubelet[2757]: W1105 15:55:05.857851 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.858154 kubelet[2757]: E1105 15:55:05.858104 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.858154 kubelet[2757]: W1105 15:55:05.858117 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.858726 kubelet[2757]: E1105 15:55:05.858651 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.858726 kubelet[2757]: W1105 15:55:05.858674 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.864935 kubelet[2757]: E1105 15:55:05.864879 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.864935 kubelet[2757]: E1105 15:55:05.864935 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.865124 kubelet[2757]: E1105 15:55:05.864952 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.865124 kubelet[2757]: E1105 15:55:05.864966 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.869999 kubelet[2757]: E1105 15:55:05.869948 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.869999 kubelet[2757]: W1105 15:55:05.869988 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.870174 kubelet[2757]: E1105 15:55:05.870029 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:05.871775 kubelet[2757]: E1105 15:55:05.871733 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.871775 kubelet[2757]: W1105 15:55:05.871769 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.871898 kubelet[2757]: E1105 15:55:05.871805 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.872112 kubelet[2757]: E1105 15:55:05.872093 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.872166 kubelet[2757]: W1105 15:55:05.872112 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.872229 kubelet[2757]: E1105 15:55:05.872208 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:05.872757 kubelet[2757]: E1105 15:55:05.872733 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.872757 kubelet[2757]: W1105 15:55:05.872753 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.872946 kubelet[2757]: E1105 15:55:05.872868 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.873734 kubelet[2757]: E1105 15:55:05.873704 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.873734 kubelet[2757]: W1105 15:55:05.873730 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.873889 kubelet[2757]: E1105 15:55:05.873834 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:05.874684 kubelet[2757]: E1105 15:55:05.874655 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.874684 kubelet[2757]: W1105 15:55:05.874676 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.875356 kubelet[2757]: E1105 15:55:05.874831 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.875462 kubelet[2757]: E1105 15:55:05.875444 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.875496 kubelet[2757]: W1105 15:55:05.875464 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.875791 kubelet[2757]: E1105 15:55:05.875766 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:05.875985 kubelet[2757]: E1105 15:55:05.875966 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.876023 kubelet[2757]: W1105 15:55:05.875985 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.876717 kubelet[2757]: E1105 15:55:05.876692 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.876983 kubelet[2757]: E1105 15:55:05.876966 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.877023 kubelet[2757]: W1105 15:55:05.876984 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.877023 kubelet[2757]: E1105 15:55:05.877001 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:05.908610 kubelet[2757]: E1105 15:55:05.908446 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:05.908610 kubelet[2757]: W1105 15:55:05.908535 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:05.910727 kubelet[2757]: E1105 15:55:05.909576 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:05.911113 kubelet[2757]: E1105 15:55:05.910898 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:05.911195 containerd[1597]: time="2025-11-05T15:55:05.910372958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x5277,Uid:80636142-a3d2-4e79-81a3-b21107045a40,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:05.929735 containerd[1597]: time="2025-11-05T15:55:05.929673700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c47d4578c-b8sxk,Uid:48c916fa-86cc-4d86-be10-486f34e0531a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed36b141e359438a47257626ecab726bf71078acd529feb0a5704507620c69e3\"" Nov 5 15:55:05.930742 kubelet[2757]: E1105 15:55:05.930711 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:05.931882 containerd[1597]: time="2025-11-05T15:55:05.931795254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:55:05.952225 containerd[1597]: time="2025-11-05T15:55:05.951369309Z" 
level=info msg="connecting to shim cd4a6e3a793573fbe5b2e59174db9f517c63e03fb6cd9a5fd3ff8deb7f4c0637" address="unix:///run/containerd/s/89dd5de62f908c82ad51551a84f5d6ec425614965fc158153ac13cf16bf66c85" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:05.990714 systemd[1]: Started cri-containerd-cd4a6e3a793573fbe5b2e59174db9f517c63e03fb6cd9a5fd3ff8deb7f4c0637.scope - libcontainer container cd4a6e3a793573fbe5b2e59174db9f517c63e03fb6cd9a5fd3ff8deb7f4c0637. Nov 5 15:55:06.037348 containerd[1597]: time="2025-11-05T15:55:06.037288408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x5277,Uid:80636142-a3d2-4e79-81a3-b21107045a40,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd4a6e3a793573fbe5b2e59174db9f517c63e03fb6cd9a5fd3ff8deb7f4c0637\"" Nov 5 15:55:06.038708 kubelet[2757]: E1105 15:55:06.038673 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:07.413167 kubelet[2757]: E1105 15:55:07.413057 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62" Nov 5 15:55:07.667182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382587374.mount: Deactivated successfully. 
Nov 5 15:55:08.786274 containerd[1597]: time="2025-11-05T15:55:08.786212219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:08.787309 containerd[1597]: time="2025-11-05T15:55:08.787270366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 15:55:08.787642 containerd[1597]: time="2025-11-05T15:55:08.787611196Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:08.803301 containerd[1597]: time="2025-11-05T15:55:08.803230436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:08.804328 containerd[1597]: time="2025-11-05T15:55:08.804271009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.872429432s" Nov 5 15:55:08.804328 containerd[1597]: time="2025-11-05T15:55:08.804308093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 15:55:08.805649 containerd[1597]: time="2025-11-05T15:55:08.805491689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:55:08.827075 containerd[1597]: time="2025-11-05T15:55:08.827035018Z" level=info msg="CreateContainer within sandbox \"ed36b141e359438a47257626ecab726bf71078acd529feb0a5704507620c69e3\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:55:08.836678 containerd[1597]: time="2025-11-05T15:55:08.836627505Z" level=info msg="Container 2c39fb57357b02f83b24a5da9a45c2f0ea3eede568db8b90709b050976b5b1b1: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:55:08.845286 containerd[1597]: time="2025-11-05T15:55:08.845218079Z" level=info msg="CreateContainer within sandbox \"ed36b141e359438a47257626ecab726bf71078acd529feb0a5704507620c69e3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2c39fb57357b02f83b24a5da9a45c2f0ea3eede568db8b90709b050976b5b1b1\"" Nov 5 15:55:08.846041 containerd[1597]: time="2025-11-05T15:55:08.846009975Z" level=info msg="StartContainer for \"2c39fb57357b02f83b24a5da9a45c2f0ea3eede568db8b90709b050976b5b1b1\"" Nov 5 15:55:08.848023 containerd[1597]: time="2025-11-05T15:55:08.847215061Z" level=info msg="connecting to shim 2c39fb57357b02f83b24a5da9a45c2f0ea3eede568db8b90709b050976b5b1b1" address="unix:///run/containerd/s/c2a66d51840ea9e38d657d9aa8db40712f0029d505bf7f793b184b7cd61c1765" protocol=ttrpc version=3 Nov 5 15:55:08.878696 systemd[1]: Started cri-containerd-2c39fb57357b02f83b24a5da9a45c2f0ea3eede568db8b90709b050976b5b1b1.scope - libcontainer container 2c39fb57357b02f83b24a5da9a45c2f0ea3eede568db8b90709b050976b5b1b1. 
Nov 5 15:55:08.945467 containerd[1597]: time="2025-11-05T15:55:08.945290322Z" level=info msg="StartContainer for \"2c39fb57357b02f83b24a5da9a45c2f0ea3eede568db8b90709b050976b5b1b1\" returns successfully" Nov 5 15:55:09.413443 kubelet[2757]: E1105 15:55:09.412663 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62" Nov 5 15:55:09.646107 kubelet[2757]: E1105 15:55:09.645771 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:09.653021 kubelet[2757]: E1105 15:55:09.652989 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.653273 kubelet[2757]: W1105 15:55:09.653203 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.653273 kubelet[2757]: E1105 15:55:09.653231 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.653732 kubelet[2757]: E1105 15:55:09.653688 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.653732 kubelet[2757]: W1105 15:55:09.653702 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.653942 kubelet[2757]: E1105 15:55:09.653809 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.654146 kubelet[2757]: E1105 15:55:09.654135 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.654280 kubelet[2757]: W1105 15:55:09.654207 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.654280 kubelet[2757]: E1105 15:55:09.654223 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.654618 kubelet[2757]: E1105 15:55:09.654606 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.654830 kubelet[2757]: W1105 15:55:09.654692 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.654830 kubelet[2757]: E1105 15:55:09.654722 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.655198 kubelet[2757]: E1105 15:55:09.655123 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.655198 kubelet[2757]: W1105 15:55:09.655136 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.655198 kubelet[2757]: E1105 15:55:09.655149 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.655676 kubelet[2757]: E1105 15:55:09.655598 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.655676 kubelet[2757]: W1105 15:55:09.655626 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.655676 kubelet[2757]: E1105 15:55:09.655637 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.656218 kubelet[2757]: E1105 15:55:09.656124 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.656218 kubelet[2757]: W1105 15:55:09.656138 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.656218 kubelet[2757]: E1105 15:55:09.656153 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.656600 kubelet[2757]: E1105 15:55:09.656519 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.656600 kubelet[2757]: W1105 15:55:09.656531 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.656600 kubelet[2757]: E1105 15:55:09.656542 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.656951 kubelet[2757]: E1105 15:55:09.656888 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.656951 kubelet[2757]: W1105 15:55:09.656899 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.656951 kubelet[2757]: E1105 15:55:09.656913 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.657404 kubelet[2757]: E1105 15:55:09.657303 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.657404 kubelet[2757]: W1105 15:55:09.657331 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.657404 kubelet[2757]: E1105 15:55:09.657348 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.657856 kubelet[2757]: E1105 15:55:09.657739 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.657856 kubelet[2757]: W1105 15:55:09.657750 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.657856 kubelet[2757]: E1105 15:55:09.657763 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.658441 kubelet[2757]: E1105 15:55:09.658204 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.658441 kubelet[2757]: W1105 15:55:09.658336 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.658441 kubelet[2757]: E1105 15:55:09.658351 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.658859 kubelet[2757]: E1105 15:55:09.658845 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.658983 kubelet[2757]: W1105 15:55:09.658915 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.658983 kubelet[2757]: E1105 15:55:09.658929 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.659269 kubelet[2757]: E1105 15:55:09.659210 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.659269 kubelet[2757]: W1105 15:55:09.659221 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.659269 kubelet[2757]: E1105 15:55:09.659230 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.659776 kubelet[2757]: E1105 15:55:09.659671 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.659776 kubelet[2757]: W1105 15:55:09.659686 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.659776 kubelet[2757]: E1105 15:55:09.659701 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.689075 kubelet[2757]: E1105 15:55:09.687874 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.689075 kubelet[2757]: W1105 15:55:09.687903 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.689075 kubelet[2757]: E1105 15:55:09.687990 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.689075 kubelet[2757]: E1105 15:55:09.688673 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.689075 kubelet[2757]: W1105 15:55:09.688692 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.689075 kubelet[2757]: E1105 15:55:09.688714 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.689075 kubelet[2757]: E1105 15:55:09.688961 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.689075 kubelet[2757]: W1105 15:55:09.688970 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.689075 kubelet[2757]: E1105 15:55:09.688985 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.690360 kubelet[2757]: E1105 15:55:09.689186 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.690360 kubelet[2757]: W1105 15:55:09.689197 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.690360 kubelet[2757]: E1105 15:55:09.689211 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.690360 kubelet[2757]: E1105 15:55:09.689482 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.690360 kubelet[2757]: W1105 15:55:09.689494 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.690360 kubelet[2757]: E1105 15:55:09.689508 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.690360 kubelet[2757]: E1105 15:55:09.689660 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.690360 kubelet[2757]: W1105 15:55:09.689667 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.690360 kubelet[2757]: E1105 15:55:09.689675 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.690360 kubelet[2757]: E1105 15:55:09.689827 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.691021 kubelet[2757]: W1105 15:55:09.689833 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.691021 kubelet[2757]: E1105 15:55:09.689841 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.691021 kubelet[2757]: E1105 15:55:09.690673 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.691021 kubelet[2757]: W1105 15:55:09.690695 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.691021 kubelet[2757]: E1105 15:55:09.690732 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.691565 kubelet[2757]: E1105 15:55:09.691546 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.691737 kubelet[2757]: W1105 15:55:09.691650 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.691737 kubelet[2757]: E1105 15:55:09.691681 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.692053 kubelet[2757]: E1105 15:55:09.691997 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.692053 kubelet[2757]: W1105 15:55:09.692009 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.692053 kubelet[2757]: E1105 15:55:09.692037 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.692346 kubelet[2757]: E1105 15:55:09.692334 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.692501 kubelet[2757]: W1105 15:55:09.692440 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.692501 kubelet[2757]: E1105 15:55:09.692475 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.692889 kubelet[2757]: E1105 15:55:09.692821 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.692889 kubelet[2757]: W1105 15:55:09.692833 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.692889 kubelet[2757]: E1105 15:55:09.692856 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.693220 kubelet[2757]: E1105 15:55:09.693210 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.693363 kubelet[2757]: W1105 15:55:09.693281 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.693449 kubelet[2757]: E1105 15:55:09.693304 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.693702 kubelet[2757]: E1105 15:55:09.693648 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.693702 kubelet[2757]: W1105 15:55:09.693667 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.693702 kubelet[2757]: E1105 15:55:09.693680 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.693916 kubelet[2757]: E1105 15:55:09.693852 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.693916 kubelet[2757]: W1105 15:55:09.693859 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.693916 kubelet[2757]: E1105 15:55:09.693873 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.694053 kubelet[2757]: E1105 15:55:09.694040 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.694086 kubelet[2757]: W1105 15:55:09.694053 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.694086 kubelet[2757]: E1105 15:55:09.694071 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:09.694344 kubelet[2757]: E1105 15:55:09.694314 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.694344 kubelet[2757]: W1105 15:55:09.694329 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.694573 kubelet[2757]: E1105 15:55:09.694523 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:55:09.694863 kubelet[2757]: E1105 15:55:09.694842 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:55:09.694974 kubelet[2757]: W1105 15:55:09.694919 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:55:09.694974 kubelet[2757]: E1105 15:55:09.694951 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:55:10.343411 containerd[1597]: time="2025-11-05T15:55:10.343295282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:10.344922 containerd[1597]: time="2025-11-05T15:55:10.344882248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 15:55:10.346414 containerd[1597]: time="2025-11-05T15:55:10.345986506Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:10.356174 containerd[1597]: time="2025-11-05T15:55:10.356122363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:10.357148 containerd[1597]: time="2025-11-05T15:55:10.357104742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.551572617s" Nov 5 15:55:10.357358 containerd[1597]: time="2025-11-05T15:55:10.357289412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 15:55:10.361104 containerd[1597]: time="2025-11-05T15:55:10.361059567Z" level=info msg="CreateContainer within sandbox \"cd4a6e3a793573fbe5b2e59174db9f517c63e03fb6cd9a5fd3ff8deb7f4c0637\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:55:10.374500 containerd[1597]: time="2025-11-05T15:55:10.374444448Z" level=info msg="Container 584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:55:10.391134 containerd[1597]: time="2025-11-05T15:55:10.391010237Z" level=info msg="CreateContainer within sandbox \"cd4a6e3a793573fbe5b2e59174db9f517c63e03fb6cd9a5fd3ff8deb7f4c0637\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc\"" Nov 5 15:55:10.392011 containerd[1597]: time="2025-11-05T15:55:10.391820609Z" level=info msg="StartContainer for \"584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc\"" Nov 5 15:55:10.393900 containerd[1597]: time="2025-11-05T15:55:10.393840108Z" level=info msg="connecting to shim 584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc" address="unix:///run/containerd/s/89dd5de62f908c82ad51551a84f5d6ec425614965fc158153ac13cf16bf66c85" protocol=ttrpc version=3 Nov 5 15:55:10.422636 systemd[1]: Started cri-containerd-584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc.scope - libcontainer container 584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc. Nov 5 15:55:10.473314 containerd[1597]: time="2025-11-05T15:55:10.473259106Z" level=info msg="StartContainer for \"584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc\" returns successfully" Nov 5 15:55:10.488379 systemd[1]: cri-containerd-584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc.scope: Deactivated successfully. 
Nov 5 15:55:10.515723 containerd[1597]: time="2025-11-05T15:55:10.515458245Z" level=info msg="received exit event container_id:\"584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc\" id:\"584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc\" pid:3394 exited_at:{seconds:1762358110 nanos:489985531}" Nov 5 15:55:10.531155 containerd[1597]: time="2025-11-05T15:55:10.531080604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc\" id:\"584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc\" pid:3394 exited_at:{seconds:1762358110 nanos:489985531}" Nov 5 15:55:10.574283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-584d00fa05538c00672498635eb8ab52afa061bf6b0a3043ffcc5e47354137bc-rootfs.mount: Deactivated successfully. Nov 5 15:55:10.654704 kubelet[2757]: I1105 15:55:10.654590 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:55:10.657590 kubelet[2757]: E1105 15:55:10.655961 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:10.659168 kubelet[2757]: E1105 15:55:10.659094 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:10.660399 containerd[1597]: time="2025-11-05T15:55:10.660251109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 15:55:10.682240 kubelet[2757]: I1105 15:55:10.681747 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c47d4578c-b8sxk" podStartSLOduration=2.808191486 podStartE2EDuration="5.681724104s" podCreationTimestamp="2025-11-05 15:55:05 +0000 UTC" firstStartedPulling="2025-11-05 15:55:05.931494616 +0000 UTC 
m=+21.737964807" lastFinishedPulling="2025-11-05 15:55:08.805027217 +0000 UTC m=+24.611497425" observedRunningTime="2025-11-05 15:55:09.663085065 +0000 UTC m=+25.469555276" watchObservedRunningTime="2025-11-05 15:55:10.681724104 +0000 UTC m=+26.488194315" Nov 5 15:55:11.413007 kubelet[2757]: E1105 15:55:11.412936 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62" Nov 5 15:55:12.035174 kubelet[2757]: I1105 15:55:12.033551 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:55:12.035174 kubelet[2757]: E1105 15:55:12.033919 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:12.659508 kubelet[2757]: E1105 15:55:12.659466 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:13.412918 kubelet[2757]: E1105 15:55:13.412821 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62" Nov 5 15:55:14.997470 containerd[1597]: time="2025-11-05T15:55:14.996581860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:14.997470 containerd[1597]: time="2025-11-05T15:55:14.997409897Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 5 15:55:14.998632 containerd[1597]: time="2025-11-05T15:55:14.998599268Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:15.001435 containerd[1597]: time="2025-11-05T15:55:15.001260342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:15.003476 containerd[1597]: time="2025-11-05T15:55:15.003342471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.342681586s" Nov 5 15:55:15.003476 containerd[1597]: time="2025-11-05T15:55:15.003448411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 15:55:15.008379 containerd[1597]: time="2025-11-05T15:55:15.007285267Z" level=info msg="CreateContainer within sandbox \"cd4a6e3a793573fbe5b2e59174db9f517c63e03fb6cd9a5fd3ff8deb7f4c0637\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 15:55:15.019843 containerd[1597]: time="2025-11-05T15:55:15.019780555Z" level=info msg="Container fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:55:15.023181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1859072269.mount: Deactivated successfully. 
Nov 5 15:55:15.036102 containerd[1597]: time="2025-11-05T15:55:15.035925351Z" level=info msg="CreateContainer within sandbox \"cd4a6e3a793573fbe5b2e59174db9f517c63e03fb6cd9a5fd3ff8deb7f4c0637\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e\"" Nov 5 15:55:15.038579 containerd[1597]: time="2025-11-05T15:55:15.037712964Z" level=info msg="StartContainer for \"fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e\"" Nov 5 15:55:15.039813 containerd[1597]: time="2025-11-05T15:55:15.039780390Z" level=info msg="connecting to shim fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e" address="unix:///run/containerd/s/89dd5de62f908c82ad51551a84f5d6ec425614965fc158153ac13cf16bf66c85" protocol=ttrpc version=3 Nov 5 15:55:15.066604 systemd[1]: Started cri-containerd-fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e.scope - libcontainer container fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e. 
Nov 5 15:55:15.121334 containerd[1597]: time="2025-11-05T15:55:15.121269481Z" level=info msg="StartContainer for \"fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e\" returns successfully" Nov 5 15:55:15.412808 kubelet[2757]: E1105 15:55:15.412631 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62" Nov 5 15:55:15.676202 kubelet[2757]: E1105 15:55:15.675139 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:15.845588 systemd[1]: cri-containerd-fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e.scope: Deactivated successfully. Nov 5 15:55:15.845852 systemd[1]: cri-containerd-fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e.scope: Consumed 667ms CPU time, 163.3M memory peak, 4.6M read from disk, 171.3M written to disk. 
Nov 5 15:55:15.917413 containerd[1597]: time="2025-11-05T15:55:15.917320385Z" level=info msg="received exit event container_id:\"fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e\" id:\"fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e\" pid:3454 exited_at:{seconds:1762358115 nanos:916895795}" Nov 5 15:55:15.918139 containerd[1597]: time="2025-11-05T15:55:15.918088239Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e\" id:\"fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e\" pid:3454 exited_at:{seconds:1762358115 nanos:916895795}" Nov 5 15:55:15.953001 kubelet[2757]: I1105 15:55:15.952820 2757 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 15:55:16.057416 systemd[1]: Created slice kubepods-besteffort-pod8859faf0_804b_4764_8e4a_299fd1e004ba.slice - libcontainer container kubepods-besteffort-pod8859faf0_804b_4764_8e4a_299fd1e004ba.slice. Nov 5 15:55:16.069141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd10db9ee566b4f44cd1f10001b72bbb40272c5841dbae24ec7eca9e81c44e5e-rootfs.mount: Deactivated successfully. Nov 5 15:55:16.118597 systemd[1]: Created slice kubepods-besteffort-pod86b40fbc_18e1_4614_aac7_5268cc15773b.slice - libcontainer container kubepods-besteffort-pod86b40fbc_18e1_4614_aac7_5268cc15773b.slice. Nov 5 15:55:16.129935 systemd[1]: Created slice kubepods-besteffort-podddc61783_6e23_40f0_a07f_5214382089f3.slice - libcontainer container kubepods-besteffort-podddc61783_6e23_40f0_a07f_5214382089f3.slice. Nov 5 15:55:16.140140 systemd[1]: Created slice kubepods-besteffort-podd5f77b74_d251_41a0_9423_d917b9539249.slice - libcontainer container kubepods-besteffort-podd5f77b74_d251_41a0_9423_d917b9539249.slice. 
Nov 5 15:55:16.150103 systemd[1]: Created slice kubepods-burstable-poda72b40de_cb8a_4802_8c34_e1af23d205bc.slice - libcontainer container kubepods-burstable-poda72b40de_cb8a_4802_8c34_e1af23d205bc.slice. Nov 5 15:55:16.153033 kubelet[2757]: I1105 15:55:16.151282 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6nq6\" (UniqueName: \"kubernetes.io/projected/a72b40de-cb8a-4802-8c34-e1af23d205bc-kube-api-access-m6nq6\") pod \"coredns-668d6bf9bc-v4gkt\" (UID: \"a72b40de-cb8a-4802-8c34-e1af23d205bc\") " pod="kube-system/coredns-668d6bf9bc-v4gkt" Nov 5 15:55:16.153033 kubelet[2757]: I1105 15:55:16.151317 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ddc61783-6e23-40f0-a07f-5214382089f3-calico-apiserver-certs\") pod \"calico-apiserver-57995d6575-xst49\" (UID: \"ddc61783-6e23-40f0-a07f-5214382089f3\") " pod="calico-apiserver/calico-apiserver-57995d6575-xst49" Nov 5 15:55:16.153033 kubelet[2757]: I1105 15:55:16.151353 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5kb7\" (UniqueName: \"kubernetes.io/projected/f1d50c3f-0506-4ceb-8aba-ac1f5be110f0-kube-api-access-j5kb7\") pod \"calico-apiserver-c8547764-tm5md\" (UID: \"f1d50c3f-0506-4ceb-8aba-ac1f5be110f0\") " pod="calico-apiserver/calico-apiserver-c8547764-tm5md" Nov 5 15:55:16.153033 kubelet[2757]: I1105 15:55:16.151373 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8859faf0-804b-4764-8e4a-299fd1e004ba-whisker-ca-bundle\") pod \"whisker-546dd8fd59-qtnb4\" (UID: \"8859faf0-804b-4764-8e4a-299fd1e004ba\") " pod="calico-system/whisker-546dd8fd59-qtnb4" Nov 5 15:55:16.153033 kubelet[2757]: I1105 15:55:16.151403 2757 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/aa9bd767-dbec-475c-8411-c4b48f98eada-goldmane-key-pair\") pod \"goldmane-666569f655-rjpbz\" (UID: \"aa9bd767-dbec-475c-8411-c4b48f98eada\") " pod="calico-system/goldmane-666569f655-rjpbz" Nov 5 15:55:16.154448 kubelet[2757]: I1105 15:55:16.151423 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54bmv\" (UniqueName: \"kubernetes.io/projected/8859faf0-804b-4764-8e4a-299fd1e004ba-kube-api-access-54bmv\") pod \"whisker-546dd8fd59-qtnb4\" (UID: \"8859faf0-804b-4764-8e4a-299fd1e004ba\") " pod="calico-system/whisker-546dd8fd59-qtnb4" Nov 5 15:55:16.154448 kubelet[2757]: I1105 15:55:16.151479 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86b40fbc-18e1-4614-aac7-5268cc15773b-tigera-ca-bundle\") pod \"calico-kube-controllers-7b9468c484-bwwkq\" (UID: \"86b40fbc-18e1-4614-aac7-5268cc15773b\") " pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" Nov 5 15:55:16.154448 kubelet[2757]: I1105 15:55:16.151497 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grzbf\" (UniqueName: \"kubernetes.io/projected/86b40fbc-18e1-4614-aac7-5268cc15773b-kube-api-access-grzbf\") pod \"calico-kube-controllers-7b9468c484-bwwkq\" (UID: \"86b40fbc-18e1-4614-aac7-5268cc15773b\") " pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" Nov 5 15:55:16.154448 kubelet[2757]: I1105 15:55:16.151517 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ccbr\" (UniqueName: \"kubernetes.io/projected/d5f77b74-d251-41a0-9423-d917b9539249-kube-api-access-8ccbr\") pod \"calico-apiserver-57995d6575-xtv6f\" (UID: \"d5f77b74-d251-41a0-9423-d917b9539249\") " 
pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" Nov 5 15:55:16.154448 kubelet[2757]: I1105 15:55:16.151536 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a72b40de-cb8a-4802-8c34-e1af23d205bc-config-volume\") pod \"coredns-668d6bf9bc-v4gkt\" (UID: \"a72b40de-cb8a-4802-8c34-e1af23d205bc\") " pod="kube-system/coredns-668d6bf9bc-v4gkt" Nov 5 15:55:16.154684 kubelet[2757]: I1105 15:55:16.151607 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8859faf0-804b-4764-8e4a-299fd1e004ba-whisker-backend-key-pair\") pod \"whisker-546dd8fd59-qtnb4\" (UID: \"8859faf0-804b-4764-8e4a-299fd1e004ba\") " pod="calico-system/whisker-546dd8fd59-qtnb4" Nov 5 15:55:16.154684 kubelet[2757]: I1105 15:55:16.151627 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5f77b74-d251-41a0-9423-d917b9539249-calico-apiserver-certs\") pod \"calico-apiserver-57995d6575-xtv6f\" (UID: \"d5f77b74-d251-41a0-9423-d917b9539249\") " pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" Nov 5 15:55:16.154684 kubelet[2757]: I1105 15:55:16.151642 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa9bd767-dbec-475c-8411-c4b48f98eada-goldmane-ca-bundle\") pod \"goldmane-666569f655-rjpbz\" (UID: \"aa9bd767-dbec-475c-8411-c4b48f98eada\") " pod="calico-system/goldmane-666569f655-rjpbz" Nov 5 15:55:16.154684 kubelet[2757]: I1105 15:55:16.151670 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa9bd767-dbec-475c-8411-c4b48f98eada-config\") pod 
\"goldmane-666569f655-rjpbz\" (UID: \"aa9bd767-dbec-475c-8411-c4b48f98eada\") " pod="calico-system/goldmane-666569f655-rjpbz" Nov 5 15:55:16.154684 kubelet[2757]: I1105 15:55:16.151685 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wt9f\" (UniqueName: \"kubernetes.io/projected/aa9bd767-dbec-475c-8411-c4b48f98eada-kube-api-access-7wt9f\") pod \"goldmane-666569f655-rjpbz\" (UID: \"aa9bd767-dbec-475c-8411-c4b48f98eada\") " pod="calico-system/goldmane-666569f655-rjpbz" Nov 5 15:55:16.154876 kubelet[2757]: I1105 15:55:16.151707 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh6kd\" (UniqueName: \"kubernetes.io/projected/ddc61783-6e23-40f0-a07f-5214382089f3-kube-api-access-kh6kd\") pod \"calico-apiserver-57995d6575-xst49\" (UID: \"ddc61783-6e23-40f0-a07f-5214382089f3\") " pod="calico-apiserver/calico-apiserver-57995d6575-xst49" Nov 5 15:55:16.154876 kubelet[2757]: I1105 15:55:16.151723 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f1d50c3f-0506-4ceb-8aba-ac1f5be110f0-calico-apiserver-certs\") pod \"calico-apiserver-c8547764-tm5md\" (UID: \"f1d50c3f-0506-4ceb-8aba-ac1f5be110f0\") " pod="calico-apiserver/calico-apiserver-c8547764-tm5md" Nov 5 15:55:16.154876 kubelet[2757]: I1105 15:55:16.151749 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4pdx\" (UniqueName: \"kubernetes.io/projected/6728cf2b-f0a2-4601-bd6f-e5e93e8220f6-kube-api-access-v4pdx\") pod \"coredns-668d6bf9bc-6nfsn\" (UID: \"6728cf2b-f0a2-4601-bd6f-e5e93e8220f6\") " pod="kube-system/coredns-668d6bf9bc-6nfsn" Nov 5 15:55:16.154876 kubelet[2757]: I1105 15:55:16.151768 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/6728cf2b-f0a2-4601-bd6f-e5e93e8220f6-config-volume\") pod \"coredns-668d6bf9bc-6nfsn\" (UID: \"6728cf2b-f0a2-4601-bd6f-e5e93e8220f6\") " pod="kube-system/coredns-668d6bf9bc-6nfsn" Nov 5 15:55:16.162639 systemd[1]: Created slice kubepods-burstable-pod6728cf2b_f0a2_4601_bd6f_e5e93e8220f6.slice - libcontainer container kubepods-burstable-pod6728cf2b_f0a2_4601_bd6f_e5e93e8220f6.slice. Nov 5 15:55:16.174317 systemd[1]: Created slice kubepods-besteffort-podaa9bd767_dbec_475c_8411_c4b48f98eada.slice - libcontainer container kubepods-besteffort-podaa9bd767_dbec_475c_8411_c4b48f98eada.slice. Nov 5 15:55:16.189550 systemd[1]: Created slice kubepods-besteffort-podf1d50c3f_0506_4ceb_8aba_ac1f5be110f0.slice - libcontainer container kubepods-besteffort-podf1d50c3f_0506_4ceb_8aba_ac1f5be110f0.slice. Nov 5 15:55:16.421262 containerd[1597]: time="2025-11-05T15:55:16.421123790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-546dd8fd59-qtnb4,Uid:8859faf0-804b-4764-8e4a-299fd1e004ba,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:16.427856 containerd[1597]: time="2025-11-05T15:55:16.427763129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b9468c484-bwwkq,Uid:86b40fbc-18e1-4614-aac7-5268cc15773b,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:16.441884 containerd[1597]: time="2025-11-05T15:55:16.441618405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57995d6575-xst49,Uid:ddc61783-6e23-40f0-a07f-5214382089f3,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:55:16.454652 containerd[1597]: time="2025-11-05T15:55:16.454602003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57995d6575-xtv6f,Uid:d5f77b74-d251-41a0-9423-d917b9539249,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:55:16.460902 kubelet[2757]: E1105 15:55:16.460323 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:16.468208 kubelet[2757]: E1105 15:55:16.468082 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:16.469855 containerd[1597]: time="2025-11-05T15:55:16.469544724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6nfsn,Uid:6728cf2b-f0a2-4601-bd6f-e5e93e8220f6,Namespace:kube-system,Attempt:0,}" Nov 5 15:55:16.473620 containerd[1597]: time="2025-11-05T15:55:16.473505227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v4gkt,Uid:a72b40de-cb8a-4802-8c34-e1af23d205bc,Namespace:kube-system,Attempt:0,}" Nov 5 15:55:16.493655 containerd[1597]: time="2025-11-05T15:55:16.493612954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rjpbz,Uid:aa9bd767-dbec-475c-8411-c4b48f98eada,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:16.498286 containerd[1597]: time="2025-11-05T15:55:16.498229103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8547764-tm5md,Uid:f1d50c3f-0506-4ceb-8aba-ac1f5be110f0,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:55:16.756081 kubelet[2757]: E1105 15:55:16.756037 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:16.768331 containerd[1597]: time="2025-11-05T15:55:16.767738062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:55:16.841153 containerd[1597]: time="2025-11-05T15:55:16.841103652Z" level=error msg="Failed to destroy network for sandbox \"6dc242d71a4f8d87cf49da0d24c9bbb81f520081aaa68b871effbbc9bad5e179\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.843823 containerd[1597]: time="2025-11-05T15:55:16.843750442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-546dd8fd59-qtnb4,Uid:8859faf0-804b-4764-8e4a-299fd1e004ba,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dc242d71a4f8d87cf49da0d24c9bbb81f520081aaa68b871effbbc9bad5e179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.849673 kubelet[2757]: E1105 15:55:16.849616 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dc242d71a4f8d87cf49da0d24c9bbb81f520081aaa68b871effbbc9bad5e179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.849858 kubelet[2757]: E1105 15:55:16.849697 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dc242d71a4f8d87cf49da0d24c9bbb81f520081aaa68b871effbbc9bad5e179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-546dd8fd59-qtnb4" Nov 5 15:55:16.849858 kubelet[2757]: E1105 15:55:16.849719 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dc242d71a4f8d87cf49da0d24c9bbb81f520081aaa68b871effbbc9bad5e179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/whisker-546dd8fd59-qtnb4" Nov 5 15:55:16.853910 kubelet[2757]: E1105 15:55:16.850074 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-546dd8fd59-qtnb4_calico-system(8859faf0-804b-4764-8e4a-299fd1e004ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-546dd8fd59-qtnb4_calico-system(8859faf0-804b-4764-8e4a-299fd1e004ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dc242d71a4f8d87cf49da0d24c9bbb81f520081aaa68b871effbbc9bad5e179\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-546dd8fd59-qtnb4" podUID="8859faf0-804b-4764-8e4a-299fd1e004ba" Nov 5 15:55:16.870261 containerd[1597]: time="2025-11-05T15:55:16.870213240Z" level=error msg="Failed to destroy network for sandbox \"57eba31b3b1b4b8361fd0be3cbb5b0d94587300f516c5b3ba7795171c74f185c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.871526 containerd[1597]: time="2025-11-05T15:55:16.871481883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v4gkt,Uid:a72b40de-cb8a-4802-8c34-e1af23d205bc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57eba31b3b1b4b8361fd0be3cbb5b0d94587300f516c5b3ba7795171c74f185c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.872106 kubelet[2757]: E1105 15:55:16.872067 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"57eba31b3b1b4b8361fd0be3cbb5b0d94587300f516c5b3ba7795171c74f185c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.872201 kubelet[2757]: E1105 15:55:16.872127 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57eba31b3b1b4b8361fd0be3cbb5b0d94587300f516c5b3ba7795171c74f185c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v4gkt" Nov 5 15:55:16.872201 kubelet[2757]: E1105 15:55:16.872153 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57eba31b3b1b4b8361fd0be3cbb5b0d94587300f516c5b3ba7795171c74f185c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v4gkt" Nov 5 15:55:16.873412 kubelet[2757]: E1105 15:55:16.872444 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v4gkt_kube-system(a72b40de-cb8a-4802-8c34-e1af23d205bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v4gkt_kube-system(a72b40de-cb8a-4802-8c34-e1af23d205bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57eba31b3b1b4b8361fd0be3cbb5b0d94587300f516c5b3ba7795171c74f185c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-v4gkt" podUID="a72b40de-cb8a-4802-8c34-e1af23d205bc" Nov 5 15:55:16.912997 containerd[1597]: time="2025-11-05T15:55:16.912940882Z" level=error msg="Failed to destroy network for sandbox \"33ba5fa33e76c9acc4ab41a480b0b92cd29e7069dda43aa180c13a7c7bf92754\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.920126 containerd[1597]: time="2025-11-05T15:55:16.919827545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b9468c484-bwwkq,Uid:86b40fbc-18e1-4614-aac7-5268cc15773b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ba5fa33e76c9acc4ab41a480b0b92cd29e7069dda43aa180c13a7c7bf92754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.920473 containerd[1597]: time="2025-11-05T15:55:16.920180798Z" level=error msg="Failed to destroy network for sandbox \"1c03b61246408f76799b0df3e86ae6f5a87bb043f14f241833fd348c721a544f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.920520 containerd[1597]: time="2025-11-05T15:55:16.920443683Z" level=error msg="Failed to destroy network for sandbox \"b880fe69e86ca1d5f4371a6c0aa31d9569401070eff0dd8779924246c445d208\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.921698 kubelet[2757]: E1105 15:55:16.921258 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"33ba5fa33e76c9acc4ab41a480b0b92cd29e7069dda43aa180c13a7c7bf92754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.921698 kubelet[2757]: E1105 15:55:16.921416 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ba5fa33e76c9acc4ab41a480b0b92cd29e7069dda43aa180c13a7c7bf92754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" Nov 5 15:55:16.921698 kubelet[2757]: E1105 15:55:16.921456 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ba5fa33e76c9acc4ab41a480b0b92cd29e7069dda43aa180c13a7c7bf92754\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" Nov 5 15:55:16.922568 kubelet[2757]: E1105 15:55:16.921522 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b9468c484-bwwkq_calico-system(86b40fbc-18e1-4614-aac7-5268cc15773b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b9468c484-bwwkq_calico-system(86b40fbc-18e1-4614-aac7-5268cc15773b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33ba5fa33e76c9acc4ab41a480b0b92cd29e7069dda43aa180c13a7c7bf92754\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" podUID="86b40fbc-18e1-4614-aac7-5268cc15773b" Nov 5 15:55:16.922730 containerd[1597]: time="2025-11-05T15:55:16.922669881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57995d6575-xtv6f,Uid:d5f77b74-d251-41a0-9423-d917b9539249,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c03b61246408f76799b0df3e86ae6f5a87bb043f14f241833fd348c721a544f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.923231 kubelet[2757]: E1105 15:55:16.923164 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c03b61246408f76799b0df3e86ae6f5a87bb043f14f241833fd348c721a544f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.923304 kubelet[2757]: E1105 15:55:16.923253 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c03b61246408f76799b0df3e86ae6f5a87bb043f14f241833fd348c721a544f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" Nov 5 15:55:16.923304 kubelet[2757]: E1105 15:55:16.923294 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c03b61246408f76799b0df3e86ae6f5a87bb043f14f241833fd348c721a544f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" Nov 5 15:55:16.923791 kubelet[2757]: E1105 15:55:16.923611 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57995d6575-xtv6f_calico-apiserver(d5f77b74-d251-41a0-9423-d917b9539249)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57995d6575-xtv6f_calico-apiserver(d5f77b74-d251-41a0-9423-d917b9539249)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c03b61246408f76799b0df3e86ae6f5a87bb043f14f241833fd348c721a544f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" podUID="d5f77b74-d251-41a0-9423-d917b9539249" Nov 5 15:55:16.924732 kubelet[2757]: E1105 15:55:16.924468 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b880fe69e86ca1d5f4371a6c0aa31d9569401070eff0dd8779924246c445d208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.924732 kubelet[2757]: E1105 15:55:16.924511 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b880fe69e86ca1d5f4371a6c0aa31d9569401070eff0dd8779924246c445d208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-c8547764-tm5md" Nov 5 15:55:16.924732 kubelet[2757]: E1105 15:55:16.924531 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b880fe69e86ca1d5f4371a6c0aa31d9569401070eff0dd8779924246c445d208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8547764-tm5md" Nov 5 15:55:16.924857 containerd[1597]: time="2025-11-05T15:55:16.924213213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8547764-tm5md,Uid:f1d50c3f-0506-4ceb-8aba-ac1f5be110f0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b880fe69e86ca1d5f4371a6c0aa31d9569401070eff0dd8779924246c445d208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.924857 containerd[1597]: time="2025-11-05T15:55:16.924753425Z" level=error msg="Failed to destroy network for sandbox \"c1c09a48159de575e93334a0d7430ab346f9202eaf6f2a3aed6174b3eb72006d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.925227 kubelet[2757]: E1105 15:55:16.924577 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c8547764-tm5md_calico-apiserver(f1d50c3f-0506-4ceb-8aba-ac1f5be110f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c8547764-tm5md_calico-apiserver(f1d50c3f-0506-4ceb-8aba-ac1f5be110f0)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"b880fe69e86ca1d5f4371a6c0aa31d9569401070eff0dd8779924246c445d208\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8547764-tm5md" podUID="f1d50c3f-0506-4ceb-8aba-ac1f5be110f0" Nov 5 15:55:16.927578 containerd[1597]: time="2025-11-05T15:55:16.927075136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6nfsn,Uid:6728cf2b-f0a2-4601-bd6f-e5e93e8220f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c09a48159de575e93334a0d7430ab346f9202eaf6f2a3aed6174b3eb72006d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.929419 kubelet[2757]: E1105 15:55:16.928737 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c09a48159de575e93334a0d7430ab346f9202eaf6f2a3aed6174b3eb72006d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.929419 kubelet[2757]: E1105 15:55:16.928893 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c09a48159de575e93334a0d7430ab346f9202eaf6f2a3aed6174b3eb72006d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6nfsn" Nov 5 15:55:16.929419 kubelet[2757]: E1105 15:55:16.928918 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c09a48159de575e93334a0d7430ab346f9202eaf6f2a3aed6174b3eb72006d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6nfsn" Nov 5 15:55:16.929621 kubelet[2757]: E1105 15:55:16.929053 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6nfsn_kube-system(6728cf2b-f0a2-4601-bd6f-e5e93e8220f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6nfsn_kube-system(6728cf2b-f0a2-4601-bd6f-e5e93e8220f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1c09a48159de575e93334a0d7430ab346f9202eaf6f2a3aed6174b3eb72006d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6nfsn" podUID="6728cf2b-f0a2-4601-bd6f-e5e93e8220f6" Nov 5 15:55:16.932315 containerd[1597]: time="2025-11-05T15:55:16.931512359Z" level=error msg="Failed to destroy network for sandbox \"8ebd49f258c68b6e6ec69d686c77240efc507bcada93461694f78863a5b6e3af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.932753 containerd[1597]: time="2025-11-05T15:55:16.932717882Z" level=error msg="Failed to destroy network for sandbox \"b4650a8b2445d2e24051fa35a645d6af96d88c66a28814c6a5c4b92397a5da88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.934050 containerd[1597]: 
time="2025-11-05T15:55:16.933069499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57995d6575-xst49,Uid:ddc61783-6e23-40f0-a07f-5214382089f3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebd49f258c68b6e6ec69d686c77240efc507bcada93461694f78863a5b6e3af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.934185 kubelet[2757]: E1105 15:55:16.933997 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebd49f258c68b6e6ec69d686c77240efc507bcada93461694f78863a5b6e3af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.935372 kubelet[2757]: E1105 15:55:16.934284 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebd49f258c68b6e6ec69d686c77240efc507bcada93461694f78863a5b6e3af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" Nov 5 15:55:16.935372 kubelet[2757]: E1105 15:55:16.934329 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebd49f258c68b6e6ec69d686c77240efc507bcada93461694f78863a5b6e3af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" 
Nov 5 15:55:16.935606 kubelet[2757]: E1105 15:55:16.935455 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57995d6575-xst49_calico-apiserver(ddc61783-6e23-40f0-a07f-5214382089f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57995d6575-xst49_calico-apiserver(ddc61783-6e23-40f0-a07f-5214382089f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ebd49f258c68b6e6ec69d686c77240efc507bcada93461694f78863a5b6e3af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" podUID="ddc61783-6e23-40f0-a07f-5214382089f3" Nov 5 15:55:16.935731 containerd[1597]: time="2025-11-05T15:55:16.935549288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rjpbz,Uid:aa9bd767-dbec-475c-8411-c4b48f98eada,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4650a8b2445d2e24051fa35a645d6af96d88c66a28814c6a5c4b92397a5da88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.936188 kubelet[2757]: E1105 15:55:16.935911 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4650a8b2445d2e24051fa35a645d6af96d88c66a28814c6a5c4b92397a5da88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:16.936188 kubelet[2757]: E1105 15:55:16.935960 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"b4650a8b2445d2e24051fa35a645d6af96d88c66a28814c6a5c4b92397a5da88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rjpbz" Nov 5 15:55:16.936188 kubelet[2757]: E1105 15:55:16.935978 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4650a8b2445d2e24051fa35a645d6af96d88c66a28814c6a5c4b92397a5da88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rjpbz" Nov 5 15:55:16.936346 kubelet[2757]: E1105 15:55:16.936034 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-rjpbz_calico-system(aa9bd767-dbec-475c-8411-c4b48f98eada)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-rjpbz_calico-system(aa9bd767-dbec-475c-8411-c4b48f98eada)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4650a8b2445d2e24051fa35a645d6af96d88c66a28814c6a5c4b92397a5da88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-rjpbz" podUID="aa9bd767-dbec-475c-8411-c4b48f98eada" Nov 5 15:55:17.419225 systemd[1]: Created slice kubepods-besteffort-podca782dd5_c75b_4c0f_9e74_4db41ed6ac62.slice - libcontainer container kubepods-besteffort-podca782dd5_c75b_4c0f_9e74_4db41ed6ac62.slice. 
Nov 5 15:55:17.422680 containerd[1597]: time="2025-11-05T15:55:17.422636713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fjkv,Uid:ca782dd5-c75b-4c0f-9e74-4db41ed6ac62,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:17.490415 containerd[1597]: time="2025-11-05T15:55:17.490332416Z" level=error msg="Failed to destroy network for sandbox \"5440a96288bc8de535ea9a6472455658b03e7fe84a5f5e18493a56992edc0e5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:17.495415 containerd[1597]: time="2025-11-05T15:55:17.494798776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fjkv,Uid:ca782dd5-c75b-4c0f-9e74-4db41ed6ac62,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5440a96288bc8de535ea9a6472455658b03e7fe84a5f5e18493a56992edc0e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:17.495604 kubelet[2757]: E1105 15:55:17.495064 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5440a96288bc8de535ea9a6472455658b03e7fe84a5f5e18493a56992edc0e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:55:17.495604 kubelet[2757]: E1105 15:55:17.495161 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5440a96288bc8de535ea9a6472455658b03e7fe84a5f5e18493a56992edc0e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5fjkv" Nov 5 15:55:17.495604 kubelet[2757]: E1105 15:55:17.495194 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5440a96288bc8de535ea9a6472455658b03e7fe84a5f5e18493a56992edc0e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5fjkv" Nov 5 15:55:17.496002 kubelet[2757]: E1105 15:55:17.495238 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5fjkv_calico-system(ca782dd5-c75b-4c0f-9e74-4db41ed6ac62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5fjkv_calico-system(ca782dd5-c75b-4c0f-9e74-4db41ed6ac62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5440a96288bc8de535ea9a6472455658b03e7fe84a5f5e18493a56992edc0e5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62" Nov 5 15:55:17.496268 systemd[1]: run-netns-cni\x2d55ba29ea\x2ddab9\x2d80e3\x2df9a3\x2d75ad21f82c67.mount: Deactivated successfully. Nov 5 15:55:24.759285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260794796.mount: Deactivated successfully. 
Nov 5 15:55:24.963166 containerd[1597]: time="2025-11-05T15:55:24.956831649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:24.964794 containerd[1597]: time="2025-11-05T15:55:24.960540303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 15:55:24.979950 containerd[1597]: time="2025-11-05T15:55:24.979882569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.212065377s" Nov 5 15:55:24.979950 containerd[1597]: time="2025-11-05T15:55:24.979936985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 15:55:24.993487 containerd[1597]: time="2025-11-05T15:55:24.993399777Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:24.994397 containerd[1597]: time="2025-11-05T15:55:24.994346576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:55:25.047892 containerd[1597]: time="2025-11-05T15:55:25.047678388Z" level=info msg="CreateContainer within sandbox \"cd4a6e3a793573fbe5b2e59174db9f517c63e03fb6cd9a5fd3ff8deb7f4c0637\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:55:25.101424 containerd[1597]: time="2025-11-05T15:55:25.097880305Z" level=info msg="Container 
1f73116df706bbeeb154b115552f57ea4572bc23e1ba547d9f10f31250c2285d: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:55:25.104757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726334372.mount: Deactivated successfully. Nov 5 15:55:25.131227 containerd[1597]: time="2025-11-05T15:55:25.131162321Z" level=info msg="CreateContainer within sandbox \"cd4a6e3a793573fbe5b2e59174db9f517c63e03fb6cd9a5fd3ff8deb7f4c0637\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1f73116df706bbeeb154b115552f57ea4572bc23e1ba547d9f10f31250c2285d\"" Nov 5 15:55:25.131964 containerd[1597]: time="2025-11-05T15:55:25.131929664Z" level=info msg="StartContainer for \"1f73116df706bbeeb154b115552f57ea4572bc23e1ba547d9f10f31250c2285d\"" Nov 5 15:55:25.140174 containerd[1597]: time="2025-11-05T15:55:25.140111450Z" level=info msg="connecting to shim 1f73116df706bbeeb154b115552f57ea4572bc23e1ba547d9f10f31250c2285d" address="unix:///run/containerd/s/89dd5de62f908c82ad51551a84f5d6ec425614965fc158153ac13cf16bf66c85" protocol=ttrpc version=3 Nov 5 15:55:25.287669 systemd[1]: Started cri-containerd-1f73116df706bbeeb154b115552f57ea4572bc23e1ba547d9f10f31250c2285d.scope - libcontainer container 1f73116df706bbeeb154b115552f57ea4572bc23e1ba547d9f10f31250c2285d. Nov 5 15:55:25.399564 containerd[1597]: time="2025-11-05T15:55:25.399257631Z" level=info msg="StartContainer for \"1f73116df706bbeeb154b115552f57ea4572bc23e1ba547d9f10f31250c2285d\" returns successfully" Nov 5 15:55:25.564861 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:55:25.565921 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 5 15:55:25.798428 kubelet[2757]: E1105 15:55:25.797899 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:25.857100 kubelet[2757]: I1105 15:55:25.856915 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54bmv\" (UniqueName: \"kubernetes.io/projected/8859faf0-804b-4764-8e4a-299fd1e004ba-kube-api-access-54bmv\") pod \"8859faf0-804b-4764-8e4a-299fd1e004ba\" (UID: \"8859faf0-804b-4764-8e4a-299fd1e004ba\") " Nov 5 15:55:25.857100 kubelet[2757]: I1105 15:55:25.856983 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8859faf0-804b-4764-8e4a-299fd1e004ba-whisker-backend-key-pair\") pod \"8859faf0-804b-4764-8e4a-299fd1e004ba\" (UID: \"8859faf0-804b-4764-8e4a-299fd1e004ba\") " Nov 5 15:55:25.857100 kubelet[2757]: I1105 15:55:25.857009 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8859faf0-804b-4764-8e4a-299fd1e004ba-whisker-ca-bundle\") pod \"8859faf0-804b-4764-8e4a-299fd1e004ba\" (UID: \"8859faf0-804b-4764-8e4a-299fd1e004ba\") " Nov 5 15:55:25.898175 kubelet[2757]: I1105 15:55:25.896868 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8859faf0-804b-4764-8e4a-299fd1e004ba-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8859faf0-804b-4764-8e4a-299fd1e004ba" (UID: "8859faf0-804b-4764-8e4a-299fd1e004ba"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:55:25.910650 systemd[1]: var-lib-kubelet-pods-8859faf0\x2d804b\x2d4764\x2d8e4a\x2d299fd1e004ba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d54bmv.mount: Deactivated successfully. 
Nov 5 15:55:25.916884 systemd[1]: var-lib-kubelet-pods-8859faf0\x2d804b\x2d4764\x2d8e4a\x2d299fd1e004ba-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 15:55:25.920412 kubelet[2757]: I1105 15:55:25.918553 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8859faf0-804b-4764-8e4a-299fd1e004ba-kube-api-access-54bmv" (OuterVolumeSpecName: "kube-api-access-54bmv") pod "8859faf0-804b-4764-8e4a-299fd1e004ba" (UID: "8859faf0-804b-4764-8e4a-299fd1e004ba"). InnerVolumeSpecName "kube-api-access-54bmv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:55:25.925296 kubelet[2757]: I1105 15:55:25.925061 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8859faf0-804b-4764-8e4a-299fd1e004ba-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8859faf0-804b-4764-8e4a-299fd1e004ba" (UID: "8859faf0-804b-4764-8e4a-299fd1e004ba"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:55:25.959119 kubelet[2757]: I1105 15:55:25.959070 2757 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-54bmv\" (UniqueName: \"kubernetes.io/projected/8859faf0-804b-4764-8e4a-299fd1e004ba-kube-api-access-54bmv\") on node \"ci-4487.0.1-e-b20d930803\" DevicePath \"\"" Nov 5 15:55:25.959119 kubelet[2757]: I1105 15:55:25.959106 2757 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8859faf0-804b-4764-8e4a-299fd1e004ba-whisker-backend-key-pair\") on node \"ci-4487.0.1-e-b20d930803\" DevicePath \"\"" Nov 5 15:55:25.960060 kubelet[2757]: I1105 15:55:25.959342 2757 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8859faf0-804b-4764-8e4a-299fd1e004ba-whisker-ca-bundle\") on node \"ci-4487.0.1-e-b20d930803\" DevicePath \"\"" Nov 5 15:55:26.170479 containerd[1597]: time="2025-11-05T15:55:26.170291788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f73116df706bbeeb154b115552f57ea4572bc23e1ba547d9f10f31250c2285d\" id:\"c556fd07b8a8610708e02eccf0384c0f30b7c203750cdc5df7c7eb459077c6c4\" pid:3813 exit_status:1 exited_at:{seconds:1762358126 nanos:155530291}" Nov 5 15:55:26.434827 systemd[1]: Removed slice kubepods-besteffort-pod8859faf0_804b_4764_8e4a_299fd1e004ba.slice - libcontainer container kubepods-besteffort-pod8859faf0_804b_4764_8e4a_299fd1e004ba.slice. 
Nov 5 15:55:26.801433 kubelet[2757]: E1105 15:55:26.801134 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:26.866679 kubelet[2757]: I1105 15:55:26.866582 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x5277" podStartSLOduration=2.902738172 podStartE2EDuration="21.866552054s" podCreationTimestamp="2025-11-05 15:55:05 +0000 UTC" firstStartedPulling="2025-11-05 15:55:06.040641151 +0000 UTC m=+21.847111340" lastFinishedPulling="2025-11-05 15:55:25.004455027 +0000 UTC m=+40.810925222" observedRunningTime="2025-11-05 15:55:25.868747816 +0000 UTC m=+41.675218050" watchObservedRunningTime="2025-11-05 15:55:26.866552054 +0000 UTC m=+42.673022282" Nov 5 15:55:26.965914 systemd[1]: Created slice kubepods-besteffort-podedd0e550_b3db_4c4b_b6a7_951d0aaecf72.slice - libcontainer container kubepods-besteffort-podedd0e550_b3db_4c4b_b6a7_951d0aaecf72.slice. 
Nov 5 15:55:27.070520 kubelet[2757]: I1105 15:55:27.070333 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/edd0e550-b3db-4c4b-b6a7-951d0aaecf72-whisker-backend-key-pair\") pod \"whisker-d4fd8787-9gsmz\" (UID: \"edd0e550-b3db-4c4b-b6a7-951d0aaecf72\") " pod="calico-system/whisker-d4fd8787-9gsmz" Nov 5 15:55:27.071251 kubelet[2757]: I1105 15:55:27.070796 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lplbt\" (UniqueName: \"kubernetes.io/projected/edd0e550-b3db-4c4b-b6a7-951d0aaecf72-kube-api-access-lplbt\") pod \"whisker-d4fd8787-9gsmz\" (UID: \"edd0e550-b3db-4c4b-b6a7-951d0aaecf72\") " pod="calico-system/whisker-d4fd8787-9gsmz" Nov 5 15:55:27.071251 kubelet[2757]: I1105 15:55:27.070888 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edd0e550-b3db-4c4b-b6a7-951d0aaecf72-whisker-ca-bundle\") pod \"whisker-d4fd8787-9gsmz\" (UID: \"edd0e550-b3db-4c4b-b6a7-951d0aaecf72\") " pod="calico-system/whisker-d4fd8787-9gsmz" Nov 5 15:55:27.088513 containerd[1597]: time="2025-11-05T15:55:27.088450254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f73116df706bbeeb154b115552f57ea4572bc23e1ba547d9f10f31250c2285d\" id:\"75e3acc0e1d8a75652469d753464f359f6f7951c05af66b11a109fa55068e634\" pid:3847 exit_status:1 exited_at:{seconds:1762358127 nanos:87717106}" Nov 5 15:55:27.273507 containerd[1597]: time="2025-11-05T15:55:27.273450072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4fd8787-9gsmz,Uid:edd0e550-b3db-4c4b-b6a7-951d0aaecf72,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:27.417126 containerd[1597]: time="2025-11-05T15:55:27.416883664Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-rjpbz,Uid:aa9bd767-dbec-475c-8411-c4b48f98eada,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:27.419171 kubelet[2757]: E1105 15:55:27.418557 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:27.420071 containerd[1597]: time="2025-11-05T15:55:27.419089357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57995d6575-xst49,Uid:ddc61783-6e23-40f0-a07f-5214382089f3,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:55:27.422584 containerd[1597]: time="2025-11-05T15:55:27.422200914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6nfsn,Uid:6728cf2b-f0a2-4601-bd6f-e5e93e8220f6,Namespace:kube-system,Attempt:0,}" Nov 5 15:55:28.084638 systemd-networkd[1499]: cali2f0ad578ab7: Link UP Nov 5 15:55:28.085512 systemd-networkd[1499]: cali2f0ad578ab7: Gained carrier Nov 5 15:55:28.148989 containerd[1597]: 2025-11-05 15:55:27.695 [INFO][3977] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:55:28.148989 containerd[1597]: 2025-11-05 15:55:27.745 [INFO][3977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0 coredns-668d6bf9bc- kube-system 6728cf2b-f0a2-4601-bd6f-e5e93e8220f6 903 0 2025-11-05 15:54:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.1-e-b20d930803 coredns-668d6bf9bc-6nfsn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2f0ad578ab7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-6nfsn" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-" Nov 5 15:55:28.148989 containerd[1597]: 2025-11-05 15:55:27.745 [INFO][3977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nfsn" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0" Nov 5 15:55:28.148989 containerd[1597]: 2025-11-05 15:55:27.917 [INFO][4012] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" HandleID="k8s-pod-network.3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" Workload="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0" Nov 5 15:55:28.149540 containerd[1597]: 2025-11-05 15:55:27.917 [INFO][4012] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" HandleID="k8s-pod-network.3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" Workload="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5750), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.1-e-b20d930803", "pod":"coredns-668d6bf9bc-6nfsn", "timestamp":"2025-11-05 15:55:27.917192165 +0000 UTC"}, Hostname:"ci-4487.0.1-e-b20d930803", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:28.149540 containerd[1597]: 2025-11-05 15:55:27.917 [INFO][4012] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:28.149540 containerd[1597]: 2025-11-05 15:55:27.920 [INFO][4012] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:55:28.149540 containerd[1597]: 2025-11-05 15:55:27.921 [INFO][4012] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-e-b20d930803' Nov 5 15:55:28.149540 containerd[1597]: 2025-11-05 15:55:27.940 [INFO][4012] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.149540 containerd[1597]: 2025-11-05 15:55:27.956 [INFO][4012] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.149540 containerd[1597]: 2025-11-05 15:55:27.967 [INFO][4012] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.149540 containerd[1597]: 2025-11-05 15:55:27.973 [INFO][4012] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.149540 containerd[1597]: 2025-11-05 15:55:27.981 [INFO][4012] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.149942 containerd[1597]: 2025-11-05 15:55:27.981 [INFO][4012] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.149942 containerd[1597]: 2025-11-05 15:55:27.984 [INFO][4012] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273 Nov 5 15:55:28.149942 containerd[1597]: 2025-11-05 15:55:27.991 [INFO][4012] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.149942 containerd[1597]: 2025-11-05 15:55:28.002 [INFO][4012] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.69.1/26] block=192.168.69.0/26 handle="k8s-pod-network.3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.149942 containerd[1597]: 2025-11-05 15:55:28.002 [INFO][4012] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.1/26] handle="k8s-pod-network.3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.149942 containerd[1597]: 2025-11-05 15:55:28.004 [INFO][4012] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:55:28.149942 containerd[1597]: 2025-11-05 15:55:28.005 [INFO][4012] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.1/26] IPv6=[] ContainerID="3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" HandleID="k8s-pod-network.3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" Workload="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0" Nov 5 15:55:28.150229 containerd[1597]: 2025-11-05 15:55:28.023 [INFO][3977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nfsn" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6728cf2b-f0a2-4601-bd6f-e5e93e8220f6", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"", Pod:"coredns-668d6bf9bc-6nfsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f0ad578ab7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:28.150229 containerd[1597]: 2025-11-05 15:55:28.026 [INFO][3977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.1/32] ContainerID="3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nfsn" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0" Nov 5 15:55:28.150229 containerd[1597]: 2025-11-05 15:55:28.026 [INFO][3977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f0ad578ab7 ContainerID="3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nfsn" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0" Nov 5 15:55:28.150229 containerd[1597]: 2025-11-05 15:55:28.067 [INFO][3977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nfsn" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0" Nov 5 15:55:28.150229 containerd[1597]: 2025-11-05 15:55:28.068 [INFO][3977] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nfsn" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6728cf2b-f0a2-4601-bd6f-e5e93e8220f6", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273", Pod:"coredns-668d6bf9bc-6nfsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f0ad578ab7", MAC:"42:3e:81:fd:fc:39", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:28.150229 containerd[1597]: 2025-11-05 15:55:28.132 [INFO][3977] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" Namespace="kube-system" Pod="coredns-668d6bf9bc-6nfsn" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--6nfsn-eth0" Nov 5 15:55:28.186769 systemd-networkd[1499]: califb0a7c0a644: Link UP Nov 5 15:55:28.188329 systemd-networkd[1499]: califb0a7c0a644: Gained carrier Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:27.620 [INFO][3960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:27.681 [INFO][3960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0 calico-apiserver-57995d6575- calico-apiserver ddc61783-6e23-40f0-a07f-5214382089f3 902 0 2025-11-05 15:54:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57995d6575 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.1-e-b20d930803 calico-apiserver-57995d6575-xst49 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb0a7c0a644 [] [] }} 
ContainerID="3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xst49" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:27.681 [INFO][3960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xst49" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:27.916 [INFO][3999] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" HandleID="k8s-pod-network.3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" Workload="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:27.920 [INFO][3999] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" HandleID="k8s-pod-network.3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" Workload="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003322a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.1-e-b20d930803", "pod":"calico-apiserver-57995d6575-xst49", "timestamp":"2025-11-05 15:55:27.916543186 +0000 UTC"}, Hostname:"ci-4487.0.1-e-b20d930803", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:27.920 [INFO][3999] ipam/ipam_plugin.go 
377: About to acquire host-wide IPAM lock. Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.003 [INFO][3999] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.003 [INFO][3999] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-e-b20d930803' Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.041 [INFO][3999] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.054 [INFO][3999] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.070 [INFO][3999] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.077 [INFO][3999] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.089 [INFO][3999] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.089 [INFO][3999] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.126 [INFO][3999] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7 Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.140 [INFO][3999] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 
handle="k8s-pod-network.3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.153 [INFO][3999] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.2/26] block=192.168.69.0/26 handle="k8s-pod-network.3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.154 [INFO][3999] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.2/26] handle="k8s-pod-network.3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.154 [INFO][3999] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:55:28.247631 containerd[1597]: 2025-11-05 15:55:28.154 [INFO][3999] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.2/26] IPv6=[] ContainerID="3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" HandleID="k8s-pod-network.3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" Workload="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0" Nov 5 15:55:28.249702 containerd[1597]: 2025-11-05 15:55:28.175 [INFO][3960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xst49" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0", GenerateName:"calico-apiserver-57995d6575-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddc61783-6e23-40f0-a07f-5214382089f3", ResourceVersion:"902", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57995d6575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"", Pod:"calico-apiserver-57995d6575-xst49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb0a7c0a644", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:28.249702 containerd[1597]: 2025-11-05 15:55:28.176 [INFO][3960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.2/32] ContainerID="3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xst49" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0" Nov 5 15:55:28.249702 containerd[1597]: 2025-11-05 15:55:28.176 [INFO][3960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb0a7c0a644 ContainerID="3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xst49" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0" Nov 5 15:55:28.249702 containerd[1597]: 2025-11-05 15:55:28.191 [INFO][3960] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xst49" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0" Nov 5 15:55:28.249702 containerd[1597]: 2025-11-05 15:55:28.191 [INFO][3960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xst49" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0", GenerateName:"calico-apiserver-57995d6575-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddc61783-6e23-40f0-a07f-5214382089f3", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57995d6575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7", Pod:"calico-apiserver-57995d6575-xst49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.2/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb0a7c0a644", MAC:"c2:bc:e2:85:f3:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:28.249702 containerd[1597]: 2025-11-05 15:55:28.233 [INFO][3960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xst49" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xst49-eth0" Nov 5 15:55:28.415553 kubelet[2757]: E1105 15:55:28.415050 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:28.419090 containerd[1597]: time="2025-11-05T15:55:28.418171579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v4gkt,Uid:a72b40de-cb8a-4802-8c34-e1af23d205bc,Namespace:kube-system,Attempt:0,}" Nov 5 15:55:28.436896 containerd[1597]: time="2025-11-05T15:55:28.436847132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57995d6575-xtv6f,Uid:d5f77b74-d251-41a0-9423-d917b9539249,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:55:28.440964 kubelet[2757]: I1105 15:55:28.440892 2757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8859faf0-804b-4764-8e4a-299fd1e004ba" path="/var/lib/kubelet/pods/8859faf0-804b-4764-8e4a-299fd1e004ba/volumes" Nov 5 15:55:28.510855 containerd[1597]: time="2025-11-05T15:55:28.510743198Z" level=info msg="connecting to shim 3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273" address="unix:///run/containerd/s/db412a040a6c1b0fddcfe5e63587d1143093c90862d71539f1ec94e7a4949998" namespace=k8s.io protocol=ttrpc version=3 
Nov 5 15:55:28.523191 systemd-networkd[1499]: calid2ff80ccc8d: Link UP Nov 5 15:55:28.526728 containerd[1597]: time="2025-11-05T15:55:28.526674148Z" level=info msg="connecting to shim 3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7" address="unix:///run/containerd/s/d3d70fe8d6cd6ad7870086d841b3640757429cb2ca3e5b542388f0b99df96269" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:28.535695 systemd-networkd[1499]: calid2ff80ccc8d: Gained carrier Nov 5 15:55:28.697590 systemd[1]: Started cri-containerd-3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7.scope - libcontainer container 3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7. Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:27.369 [INFO][3913] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:27.409 [INFO][3913] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0 whisker-d4fd8787- calico-system edd0e550-b3db-4c4b-b6a7-951d0aaecf72 977 0 2025-11-05 15:55:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d4fd8787 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487.0.1-e-b20d930803 whisker-d4fd8787-9gsmz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid2ff80ccc8d [] [] }} ContainerID="02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" Namespace="calico-system" Pod="whisker-d4fd8787-9gsmz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:27.409 [INFO][3913] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" Namespace="calico-system" 
Pod="whisker-d4fd8787-9gsmz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:27.917 [INFO][3956] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" HandleID="k8s-pod-network.02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" Workload="ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:27.917 [INFO][3956] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" HandleID="k8s-pod-network.02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" Workload="ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-e-b20d930803", "pod":"whisker-d4fd8787-9gsmz", "timestamp":"2025-11-05 15:55:27.917052229 +0000 UTC"}, Hostname:"ci-4487.0.1-e-b20d930803", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:27.917 [INFO][3956] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.154 [INFO][3956] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.154 [INFO][3956] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-e-b20d930803' Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.227 [INFO][3956] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.251 [INFO][3956] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.338 [INFO][3956] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.345 [INFO][3956] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.362 [INFO][3956] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.363 [INFO][3956] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.372 [INFO][3956] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.396 [INFO][3956] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.459 [INFO][3956] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.69.3/26] block=192.168.69.0/26 handle="k8s-pod-network.02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.465 [INFO][3956] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.3/26] handle="k8s-pod-network.02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.465 [INFO][3956] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:55:28.717040 containerd[1597]: 2025-11-05 15:55:28.465 [INFO][3956] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.3/26] IPv6=[] ContainerID="02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" HandleID="k8s-pod-network.02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" Workload="ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0" Nov 5 15:55:28.720493 containerd[1597]: 2025-11-05 15:55:28.502 [INFO][3913] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" Namespace="calico-system" Pod="whisker-d4fd8787-9gsmz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0", GenerateName:"whisker-d4fd8787-", Namespace:"calico-system", SelfLink:"", UID:"edd0e550-b3db-4c4b-b6a7-951d0aaecf72", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d4fd8787", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"", Pod:"whisker-d4fd8787-9gsmz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid2ff80ccc8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:28.720493 containerd[1597]: 2025-11-05 15:55:28.504 [INFO][3913] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.3/32] ContainerID="02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" Namespace="calico-system" Pod="whisker-d4fd8787-9gsmz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0" Nov 5 15:55:28.720493 containerd[1597]: 2025-11-05 15:55:28.504 [INFO][3913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2ff80ccc8d ContainerID="02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" Namespace="calico-system" Pod="whisker-d4fd8787-9gsmz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0" Nov 5 15:55:28.720493 containerd[1597]: 2025-11-05 15:55:28.567 [INFO][3913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" Namespace="calico-system" Pod="whisker-d4fd8787-9gsmz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0" Nov 5 15:55:28.720493 containerd[1597]: 2025-11-05 15:55:28.570 [INFO][3913] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" Namespace="calico-system" Pod="whisker-d4fd8787-9gsmz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0", GenerateName:"whisker-d4fd8787-", Namespace:"calico-system", SelfLink:"", UID:"edd0e550-b3db-4c4b-b6a7-951d0aaecf72", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d4fd8787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd", Pod:"whisker-d4fd8787-9gsmz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid2ff80ccc8d", MAC:"52:10:b2:04:64:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:28.720493 containerd[1597]: 2025-11-05 15:55:28.677 [INFO][3913] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" Namespace="calico-system" Pod="whisker-d4fd8787-9gsmz" 
WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-whisker--d4fd8787--9gsmz-eth0" Nov 5 15:55:28.730848 systemd[1]: Started cri-containerd-3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273.scope - libcontainer container 3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273. Nov 5 15:55:28.831778 containerd[1597]: time="2025-11-05T15:55:28.831719585Z" level=info msg="connecting to shim 02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd" address="unix:///run/containerd/s/7d1cba85c479cd92986f876d0ea0c8275aae7a0367fadb6f5e75e3edf55f2581" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:28.860545 systemd-networkd[1499]: cali3a8e5f383f7: Link UP Nov 5 15:55:28.865511 systemd-networkd[1499]: cali3a8e5f383f7: Gained carrier Nov 5 15:55:28.917018 systemd[1]: Started cri-containerd-02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd.scope - libcontainer container 02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd. Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:27.677 [INFO][3952] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:27.726 [INFO][3952] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0 goldmane-666569f655- calico-system aa9bd767-dbec-475c-8411-c4b48f98eada 901 0 2025-11-05 15:55:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487.0.1-e-b20d930803 goldmane-666569f655-rjpbz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3a8e5f383f7 [] [] }} ContainerID="7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" Namespace="calico-system" Pod="goldmane-666569f655-rjpbz" 
WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:27.727 [INFO][3952] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" Namespace="calico-system" Pod="goldmane-666569f655-rjpbz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:27.930 [INFO][4006] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" HandleID="k8s-pod-network.7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" Workload="ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:27.937 [INFO][4006] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" HandleID="k8s-pod-network.7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" Workload="ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366c90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-e-b20d930803", "pod":"goldmane-666569f655-rjpbz", "timestamp":"2025-11-05 15:55:27.920378439 +0000 UTC"}, Hostname:"ci-4487.0.1-e-b20d930803", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:27.937 [INFO][4006] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.465 [INFO][4006] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.465 [INFO][4006] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-e-b20d930803' Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.480 [INFO][4006] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.527 [INFO][4006] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.683 [INFO][4006] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.709 [INFO][4006] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.734 [INFO][4006] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.735 [INFO][4006] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.741 [INFO][4006] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8 Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.758 [INFO][4006] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.812 [INFO][4006] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.69.4/26] block=192.168.69.0/26 handle="k8s-pod-network.7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.814 [INFO][4006] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.4/26] handle="k8s-pod-network.7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.814 [INFO][4006] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:55:28.942684 containerd[1597]: 2025-11-05 15:55:28.814 [INFO][4006] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.4/26] IPv6=[] ContainerID="7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" HandleID="k8s-pod-network.7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" Workload="ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0" Nov 5 15:55:28.945245 containerd[1597]: 2025-11-05 15:55:28.841 [INFO][3952] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" Namespace="calico-system" Pod="goldmane-666569f655-rjpbz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"aa9bd767-dbec-475c-8411-c4b48f98eada", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"", Pod:"goldmane-666569f655-rjpbz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3a8e5f383f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:28.945245 containerd[1597]: 2025-11-05 15:55:28.841 [INFO][3952] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.4/32] ContainerID="7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" Namespace="calico-system" Pod="goldmane-666569f655-rjpbz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0" Nov 5 15:55:28.945245 containerd[1597]: 2025-11-05 15:55:28.841 [INFO][3952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a8e5f383f7 ContainerID="7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" Namespace="calico-system" Pod="goldmane-666569f655-rjpbz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0" Nov 5 15:55:28.945245 containerd[1597]: 2025-11-05 15:55:28.869 [INFO][3952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" Namespace="calico-system" Pod="goldmane-666569f655-rjpbz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0" Nov 5 15:55:28.945245 containerd[1597]: 2025-11-05 15:55:28.881 [INFO][3952] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" Namespace="calico-system" Pod="goldmane-666569f655-rjpbz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"aa9bd767-dbec-475c-8411-c4b48f98eada", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8", Pod:"goldmane-666569f655-rjpbz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3a8e5f383f7", MAC:"42:c8:b2:36:ec:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:28.945245 containerd[1597]: 2025-11-05 15:55:28.926 [INFO][3952] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" Namespace="calico-system" 
Pod="goldmane-666569f655-rjpbz" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-goldmane--666569f655--rjpbz-eth0" Nov 5 15:55:29.030702 containerd[1597]: time="2025-11-05T15:55:29.030644719Z" level=info msg="connecting to shim 7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8" address="unix:///run/containerd/s/c031d5e50a276ded208ee3969378381b8e124e9c5a5eccb0747dfdc25b4fa18f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:29.135765 systemd[1]: Started cri-containerd-7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8.scope - libcontainer container 7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8. Nov 5 15:55:29.162767 containerd[1597]: time="2025-11-05T15:55:29.162433171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6nfsn,Uid:6728cf2b-f0a2-4601-bd6f-e5e93e8220f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273\"" Nov 5 15:55:29.167334 kubelet[2757]: E1105 15:55:29.166555 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:29.178713 containerd[1597]: time="2025-11-05T15:55:29.178197508Z" level=info msg="CreateContainer within sandbox \"3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:55:29.220417 systemd-networkd[1499]: califb0a7c0a644: Gained IPv6LL Nov 5 15:55:29.241914 systemd-networkd[1499]: cali882b8573945: Link UP Nov 5 15:55:29.256343 systemd-networkd[1499]: cali882b8573945: Gained carrier Nov 5 15:55:29.287046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806828747.mount: Deactivated successfully. 
Nov 5 15:55:29.291487 containerd[1597]: time="2025-11-05T15:55:29.288885483Z" level=info msg="Container fe5d24e600561bea0879bd3ac6ae540ea2bcbf317eeeb3bb17fe15ff75087c4f: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:55:29.305425 containerd[1597]: time="2025-11-05T15:55:29.305256499Z" level=info msg="CreateContainer within sandbox \"3e70c85e503ba70e1366386a18274993f72c163a9a6988ec7ffa9d39db0a8273\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe5d24e600561bea0879bd3ac6ae540ea2bcbf317eeeb3bb17fe15ff75087c4f\"" Nov 5 15:55:29.310835 containerd[1597]: time="2025-11-05T15:55:29.310688568Z" level=info msg="StartContainer for \"fe5d24e600561bea0879bd3ac6ae540ea2bcbf317eeeb3bb17fe15ff75087c4f\"" Nov 5 15:55:29.316592 containerd[1597]: time="2025-11-05T15:55:29.316481983Z" level=info msg="connecting to shim fe5d24e600561bea0879bd3ac6ae540ea2bcbf317eeeb3bb17fe15ff75087c4f" address="unix:///run/containerd/s/db412a040a6c1b0fddcfe5e63587d1143093c90862d71539f1ec94e7a4949998" protocol=ttrpc version=3 Nov 5 15:55:29.369668 systemd[1]: Started cri-containerd-fe5d24e600561bea0879bd3ac6ae540ea2bcbf317eeeb3bb17fe15ff75087c4f.scope - libcontainer container fe5d24e600561bea0879bd3ac6ae540ea2bcbf317eeeb3bb17fe15ff75087c4f. 
Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:28.678 [INFO][4078] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0 calico-apiserver-57995d6575- calico-apiserver d5f77b74-d251-41a0-9423-d917b9539249 898 0 2025-11-05 15:54:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57995d6575 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.1-e-b20d930803 calico-apiserver-57995d6575-xtv6f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali882b8573945 [] [] }} ContainerID="20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xtv6f" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:28.678 [INFO][4078] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xtv6f" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:28.940 [INFO][4157] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" HandleID="k8s-pod-network.20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" Workload="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:28.944 [INFO][4157] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" HandleID="k8s-pod-network.20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" Workload="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003310e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.1-e-b20d930803", "pod":"calico-apiserver-57995d6575-xtv6f", "timestamp":"2025-11-05 15:55:28.940340836 +0000 UTC"}, Hostname:"ci-4487.0.1-e-b20d930803", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:28.945 [INFO][4157] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:28.945 [INFO][4157] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:28.946 [INFO][4157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-e-b20d930803' Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:28.973 [INFO][4157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:29.010 [INFO][4157] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:29.055 [INFO][4157] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:29.070 [INFO][4157] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:29.087 [INFO][4157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:29.090 [INFO][4157] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:29.101 [INFO][4157] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9 Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:29.122 [INFO][4157] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:29.157 [INFO][4157] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.69.5/26] block=192.168.69.0/26 handle="k8s-pod-network.20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:29.157 [INFO][4157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.5/26] handle="k8s-pod-network.20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:29.158 [INFO][4157] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:55:29.370580 containerd[1597]: 2025-11-05 15:55:29.160 [INFO][4157] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.5/26] IPv6=[] ContainerID="20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" HandleID="k8s-pod-network.20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" Workload="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0" Nov 5 15:55:29.372725 containerd[1597]: 2025-11-05 15:55:29.178 [INFO][4078] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xtv6f" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0", GenerateName:"calico-apiserver-57995d6575-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5f77b74-d251-41a0-9423-d917b9539249", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"57995d6575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"", Pod:"calico-apiserver-57995d6575-xtv6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali882b8573945", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:29.372725 containerd[1597]: 2025-11-05 15:55:29.189 [INFO][4078] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.5/32] ContainerID="20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xtv6f" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0" Nov 5 15:55:29.372725 containerd[1597]: 2025-11-05 15:55:29.191 [INFO][4078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali882b8573945 ContainerID="20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xtv6f" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0" Nov 5 15:55:29.372725 containerd[1597]: 2025-11-05 15:55:29.258 [INFO][4078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xtv6f" 
WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0" Nov 5 15:55:29.372725 containerd[1597]: 2025-11-05 15:55:29.259 [INFO][4078] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xtv6f" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0", GenerateName:"calico-apiserver-57995d6575-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5f77b74-d251-41a0-9423-d917b9539249", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57995d6575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9", Pod:"calico-apiserver-57995d6575-xtv6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali882b8573945", MAC:"12:09:11:32:6a:bd", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:29.372725 containerd[1597]: 2025-11-05 15:55:29.362 [INFO][4078] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" Namespace="calico-apiserver" Pod="calico-apiserver-57995d6575-xtv6f" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--57995d6575--xtv6f-eth0" Nov 5 15:55:29.418649 containerd[1597]: time="2025-11-05T15:55:29.418576169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fjkv,Uid:ca782dd5-c75b-4c0f-9e74-4db41ed6ac62,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:29.424163 containerd[1597]: time="2025-11-05T15:55:29.423955447Z" level=info msg="connecting to shim 20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9" address="unix:///run/containerd/s/487ca3125a9822679f0630c8a55ff7b119e3396bbca7a0de6a390b4b3664edd9" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:29.478332 systemd-networkd[1499]: cali2f0ad578ab7: Gained IPv6LL Nov 5 15:55:29.496588 systemd-networkd[1499]: calid0c27e01900: Link UP Nov 5 15:55:29.500149 systemd-networkd[1499]: calid0c27e01900: Gained carrier Nov 5 15:55:29.525659 systemd[1]: Started cri-containerd-20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9.scope - libcontainer container 20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9. 
Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:28.805 [INFO][4065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0 coredns-668d6bf9bc- kube-system a72b40de-cb8a-4802-8c34-e1af23d205bc 899 0 2025-11-05 15:54:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.1-e-b20d930803 coredns-668d6bf9bc-v4gkt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid0c27e01900 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" Namespace="kube-system" Pod="coredns-668d6bf9bc-v4gkt" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-" Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:28.808 [INFO][4065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" Namespace="kube-system" Pod="coredns-668d6bf9bc-v4gkt" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0" Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:28.972 [INFO][4199] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" HandleID="k8s-pod-network.29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" Workload="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0" Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:28.973 [INFO][4199] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" HandleID="k8s-pod-network.29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" 
Workload="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b7d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.1-e-b20d930803", "pod":"coredns-668d6bf9bc-v4gkt", "timestamp":"2025-11-05 15:55:28.972619188 +0000 UTC"}, Hostname:"ci-4487.0.1-e-b20d930803", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:28.974 [INFO][4199] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.157 [INFO][4199] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.159 [INFO][4199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-e-b20d930803' Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.190 [INFO][4199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.312 [INFO][4199] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.361 [INFO][4199] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.375 [INFO][4199] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.401 [INFO][4199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.572116 
containerd[1597]: 2025-11-05 15:55:29.401 [INFO][4199] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.416 [INFO][4199] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654 Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.439 [INFO][4199] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.460 [INFO][4199] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.6/26] block=192.168.69.0/26 handle="k8s-pod-network.29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.460 [INFO][4199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.6/26] handle="k8s-pod-network.29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.461 [INFO][4199] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:55:29.572116 containerd[1597]: 2025-11-05 15:55:29.461 [INFO][4199] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.6/26] IPv6=[] ContainerID="29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" HandleID="k8s-pod-network.29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" Workload="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0" Nov 5 15:55:29.576583 containerd[1597]: 2025-11-05 15:55:29.484 [INFO][4065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" Namespace="kube-system" Pod="coredns-668d6bf9bc-v4gkt" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a72b40de-cb8a-4802-8c34-e1af23d205bc", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"", Pod:"coredns-668d6bf9bc-v4gkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calid0c27e01900", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:29.576583 containerd[1597]: 2025-11-05 15:55:29.486 [INFO][4065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.6/32] ContainerID="29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" Namespace="kube-system" Pod="coredns-668d6bf9bc-v4gkt" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0" Nov 5 15:55:29.576583 containerd[1597]: 2025-11-05 15:55:29.486 [INFO][4065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0c27e01900 ContainerID="29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" Namespace="kube-system" Pod="coredns-668d6bf9bc-v4gkt" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0" Nov 5 15:55:29.576583 containerd[1597]: 2025-11-05 15:55:29.520 [INFO][4065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" Namespace="kube-system" Pod="coredns-668d6bf9bc-v4gkt" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0" Nov 5 15:55:29.576583 containerd[1597]: 2025-11-05 15:55:29.525 [INFO][4065] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" Namespace="kube-system" Pod="coredns-668d6bf9bc-v4gkt" 
WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a72b40de-cb8a-4802-8c34-e1af23d205bc", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654", Pod:"coredns-668d6bf9bc-v4gkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0c27e01900", MAC:"9e:dd:73:d6:99:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:29.576583 containerd[1597]: 
2025-11-05 15:55:29.556 [INFO][4065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" Namespace="kube-system" Pod="coredns-668d6bf9bc-v4gkt" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-coredns--668d6bf9bc--v4gkt-eth0" Nov 5 15:55:29.660912 containerd[1597]: time="2025-11-05T15:55:29.660238382Z" level=info msg="StartContainer for \"fe5d24e600561bea0879bd3ac6ae540ea2bcbf317eeeb3bb17fe15ff75087c4f\" returns successfully" Nov 5 15:55:29.700649 containerd[1597]: time="2025-11-05T15:55:29.700162950Z" level=info msg="connecting to shim 29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654" address="unix:///run/containerd/s/e0d884d174625d941b04156b1bca16c418055ae4f3024dd51c4f2e4b374a925b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:29.769976 systemd[1]: Started cri-containerd-29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654.scope - libcontainer container 29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654. 
Nov 5 15:55:29.852209 kubelet[2757]: E1105 15:55:29.851278 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:29.909702 containerd[1597]: time="2025-11-05T15:55:29.909383607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57995d6575-xst49,Uid:ddc61783-6e23-40f0-a07f-5214382089f3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3f22118fd7e30f79846dccb3a5676bb04525dd54469aac2ddae56eb77f210bf7\"" Nov 5 15:55:29.914412 containerd[1597]: time="2025-11-05T15:55:29.914131779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:29.917490 kubelet[2757]: I1105 15:55:29.917411 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6nfsn" podStartSLOduration=41.917359682 podStartE2EDuration="41.917359682s" podCreationTimestamp="2025-11-05 15:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:55:29.916227525 +0000 UTC m=+45.722697733" watchObservedRunningTime="2025-11-05 15:55:29.917359682 +0000 UTC m=+45.723829914" Nov 5 15:55:29.924054 systemd-networkd[1499]: calid2ff80ccc8d: Gained IPv6LL Nov 5 15:55:30.056735 containerd[1597]: time="2025-11-05T15:55:30.056658388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rjpbz,Uid:aa9bd767-dbec-475c-8411-c4b48f98eada,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e1ce445f4afa33e1465c4c9d0c18fbb9b9513f078882e6f9dd427aed96db1f8\"" Nov 5 15:55:30.094430 containerd[1597]: time="2025-11-05T15:55:30.093835974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v4gkt,Uid:a72b40de-cb8a-4802-8c34-e1af23d205bc,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654\"" Nov 5 15:55:30.097166 kubelet[2757]: E1105 15:55:30.097110 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:30.140058 systemd-networkd[1499]: cali5c435ce754d: Link UP Nov 5 15:55:30.160423 containerd[1597]: time="2025-11-05T15:55:30.158134557Z" level=info msg="CreateContainer within sandbox \"29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:55:30.159721 systemd-networkd[1499]: cali5c435ce754d: Gained carrier Nov 5 15:55:30.218784 containerd[1597]: time="2025-11-05T15:55:30.216662388Z" level=info msg="Container 3508103c6754e3dcd7a43ad730426b1806b634d47418fa4178c28414586b7192: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:55:30.248568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1532820694.mount: Deactivated successfully. 
Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:29.744 [INFO][4362] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0 csi-node-driver- calico-system ca782dd5-c75b-4c0f-9e74-4db41ed6ac62 776 0 2025-11-05 15:55:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487.0.1-e-b20d930803 csi-node-driver-5fjkv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5c435ce754d [] [] }} ContainerID="5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" Namespace="calico-system" Pod="csi-node-driver-5fjkv" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:29.745 [INFO][4362] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" Namespace="calico-system" Pod="csi-node-driver-5fjkv" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:29.906 [INFO][4425] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" HandleID="k8s-pod-network.5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" Workload="ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:29.912 [INFO][4425] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" 
HandleID="k8s-pod-network.5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" Workload="ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000331bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-e-b20d930803", "pod":"csi-node-driver-5fjkv", "timestamp":"2025-11-05 15:55:29.906988052 +0000 UTC"}, Hostname:"ci-4487.0.1-e-b20d930803", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:29.913 [INFO][4425] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:29.913 [INFO][4425] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:29.913 [INFO][4425] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-e-b20d930803' Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:29.952 [INFO][4425] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:29.968 [INFO][4425] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:29.983 [INFO][4425] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:29.995 [INFO][4425] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:30.022 [INFO][4425] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:30.023 [INFO][4425] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:30.037 [INFO][4425] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230 Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:30.054 [INFO][4425] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:30.099 [INFO][4425] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.7/26] block=192.168.69.0/26 handle="k8s-pod-network.5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:30.100 [INFO][4425] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.7/26] handle="k8s-pod-network.5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:30.100 [INFO][4425] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:55:30.254647 containerd[1597]: 2025-11-05 15:55:30.100 [INFO][4425] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.7/26] IPv6=[] ContainerID="5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" HandleID="k8s-pod-network.5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" Workload="ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0" Nov 5 15:55:30.258283 containerd[1597]: 2025-11-05 15:55:30.116 [INFO][4362] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" Namespace="calico-system" Pod="csi-node-driver-5fjkv" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca782dd5-c75b-4c0f-9e74-4db41ed6ac62", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"", Pod:"csi-node-driver-5fjkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.7/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c435ce754d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:30.258283 containerd[1597]: 2025-11-05 15:55:30.117 [INFO][4362] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.7/32] ContainerID="5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" Namespace="calico-system" Pod="csi-node-driver-5fjkv" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0" Nov 5 15:55:30.258283 containerd[1597]: 2025-11-05 15:55:30.117 [INFO][4362] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c435ce754d ContainerID="5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" Namespace="calico-system" Pod="csi-node-driver-5fjkv" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0" Nov 5 15:55:30.258283 containerd[1597]: 2025-11-05 15:55:30.167 [INFO][4362] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" Namespace="calico-system" Pod="csi-node-driver-5fjkv" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0" Nov 5 15:55:30.258283 containerd[1597]: 2025-11-05 15:55:30.172 [INFO][4362] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" Namespace="calico-system" Pod="csi-node-driver-5fjkv" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"ca782dd5-c75b-4c0f-9e74-4db41ed6ac62", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230", Pod:"csi-node-driver-5fjkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c435ce754d", MAC:"5a:8f:0f:56:20:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:30.258283 containerd[1597]: 2025-11-05 15:55:30.235 [INFO][4362] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" Namespace="calico-system" Pod="csi-node-driver-5fjkv" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-csi--node--driver--5fjkv-eth0" Nov 5 15:55:30.263179 containerd[1597]: time="2025-11-05T15:55:30.262313558Z" level=info msg="CreateContainer within sandbox \"29cc79bfe9b6a06a79f28b1f06a671477171b410ea852631207093e7c617d654\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"3508103c6754e3dcd7a43ad730426b1806b634d47418fa4178c28414586b7192\"" Nov 5 15:55:30.271381 containerd[1597]: time="2025-11-05T15:55:30.270129291Z" level=info msg="StartContainer for \"3508103c6754e3dcd7a43ad730426b1806b634d47418fa4178c28414586b7192\"" Nov 5 15:55:30.282978 containerd[1597]: time="2025-11-05T15:55:30.282692596Z" level=info msg="connecting to shim 3508103c6754e3dcd7a43ad730426b1806b634d47418fa4178c28414586b7192" address="unix:///run/containerd/s/e0d884d174625d941b04156b1bca16c418055ae4f3024dd51c4f2e4b374a925b" protocol=ttrpc version=3 Nov 5 15:55:30.308318 containerd[1597]: time="2025-11-05T15:55:30.307305777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4fd8787-9gsmz,Uid:edd0e550-b3db-4c4b-b6a7-951d0aaecf72,Namespace:calico-system,Attempt:0,} returns sandbox id \"02f147708073af8d448b52ea30cbdaaa61e212c33553ae456f1ab06bc6ab10bd\"" Nov 5 15:55:30.339130 containerd[1597]: time="2025-11-05T15:55:30.339076630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:30.341608 containerd[1597]: time="2025-11-05T15:55:30.341559566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:30.341905 containerd[1597]: time="2025-11-05T15:55:30.341852266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:30.343494 kubelet[2757]: E1105 15:55:30.343156 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:30.344707 kubelet[2757]: E1105 15:55:30.343728 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:30.347469 containerd[1597]: time="2025-11-05T15:55:30.345525609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:55:30.352084 containerd[1597]: time="2025-11-05T15:55:30.351637495Z" level=info msg="connecting to shim 5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230" address="unix:///run/containerd/s/c70bbad774157f86de38498d725faae81394525c6c838ba4fcbbed30f95a7497" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:30.361201 kubelet[2757]: E1105 15:55:30.361064 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kh6kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57995d6575-xst49_calico-apiserver(ddc61783-6e23-40f0-a07f-5214382089f3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:30.363090 kubelet[2757]: E1105 15:55:30.362524 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" podUID="ddc61783-6e23-40f0-a07f-5214382089f3" Nov 5 15:55:30.382741 systemd[1]: Started cri-containerd-3508103c6754e3dcd7a43ad730426b1806b634d47418fa4178c28414586b7192.scope - libcontainer container 3508103c6754e3dcd7a43ad730426b1806b634d47418fa4178c28414586b7192. Nov 5 15:55:30.423064 containerd[1597]: time="2025-11-05T15:55:30.422731152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8547764-tm5md,Uid:f1d50c3f-0506-4ceb-8aba-ac1f5be110f0,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:55:30.438997 systemd[1]: Started cri-containerd-5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230.scope - libcontainer container 5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230. 
Nov 5 15:55:30.529912 containerd[1597]: time="2025-11-05T15:55:30.529868728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57995d6575-xtv6f,Uid:d5f77b74-d251-41a0-9423-d917b9539249,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"20314b1aab8030c628af9f0a1e29809027d86ec133897387a211c45be626f1b9\"" Nov 5 15:55:30.560891 containerd[1597]: time="2025-11-05T15:55:30.560828159Z" level=info msg="StartContainer for \"3508103c6754e3dcd7a43ad730426b1806b634d47418fa4178c28414586b7192\" returns successfully" Nov 5 15:55:30.617078 containerd[1597]: time="2025-11-05T15:55:30.617008628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fjkv,Uid:ca782dd5-c75b-4c0f-9e74-4db41ed6ac62,Namespace:calico-system,Attempt:0,} returns sandbox id \"5843bd070b3917ab13e72b2c373e352456d63d4634b4c1b56628eea734c47230\"" Nov 5 15:55:30.627657 systemd-networkd[1499]: calid0c27e01900: Gained IPv6LL Nov 5 15:55:30.747222 systemd-networkd[1499]: cali14376bc3c10: Link UP Nov 5 15:55:30.748640 systemd-networkd[1499]: cali14376bc3c10: Gained carrier Nov 5 15:55:30.755669 systemd-networkd[1499]: cali3a8e5f383f7: Gained IPv6LL Nov 5 15:55:30.756016 systemd-networkd[1499]: cali882b8573945: Gained IPv6LL Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.591 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0 calico-apiserver-c8547764- calico-apiserver f1d50c3f-0506-4ceb-8aba-ac1f5be110f0 894 0 2025-11-05 15:54:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c8547764 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.1-e-b20d930803 calico-apiserver-c8547764-tm5md eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali14376bc3c10 [] [] }} ContainerID="d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" Namespace="calico-apiserver" Pod="calico-apiserver-c8547764-tm5md" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.591 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" Namespace="calico-apiserver" Pod="calico-apiserver-c8547764-tm5md" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.667 [INFO][4570] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" HandleID="k8s-pod-network.d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" Workload="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.668 [INFO][4570] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" HandleID="k8s-pod-network.d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" Workload="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.1-e-b20d930803", "pod":"calico-apiserver-c8547764-tm5md", "timestamp":"2025-11-05 15:55:30.667243323 +0000 UTC"}, Hostname:"ci-4487.0.1-e-b20d930803", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 
15:55:30.668 [INFO][4570] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.668 [INFO][4570] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.668 [INFO][4570] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-e-b20d930803' Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.691 [INFO][4570] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.700 [INFO][4570] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.708 [INFO][4570] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.711 [INFO][4570] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.715 [INFO][4570] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.715 [INFO][4570] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.719 [INFO][4570] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104 Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.727 [INFO][4570] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 
handle="k8s-pod-network.d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.738 [INFO][4570] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.8/26] block=192.168.69.0/26 handle="k8s-pod-network.d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.739 [INFO][4570] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.8/26] handle="k8s-pod-network.d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.739 [INFO][4570] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:55:30.771289 containerd[1597]: 2025-11-05 15:55:30.739 [INFO][4570] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.8/26] IPv6=[] ContainerID="d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" HandleID="k8s-pod-network.d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" Workload="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0" Nov 5 15:55:30.772710 containerd[1597]: 2025-11-05 15:55:30.742 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" Namespace="calico-apiserver" Pod="calico-apiserver-c8547764-tm5md" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0", GenerateName:"calico-apiserver-c8547764-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1d50c3f-0506-4ceb-8aba-ac1f5be110f0", ResourceVersion:"894", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8547764", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"", Pod:"calico-apiserver-c8547764-tm5md", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14376bc3c10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:30.772710 containerd[1597]: 2025-11-05 15:55:30.742 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.8/32] ContainerID="d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" Namespace="calico-apiserver" Pod="calico-apiserver-c8547764-tm5md" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0" Nov 5 15:55:30.772710 containerd[1597]: 2025-11-05 15:55:30.742 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14376bc3c10 ContainerID="d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" Namespace="calico-apiserver" Pod="calico-apiserver-c8547764-tm5md" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0" Nov 5 15:55:30.772710 containerd[1597]: 2025-11-05 15:55:30.747 [INFO][4527] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" Namespace="calico-apiserver" Pod="calico-apiserver-c8547764-tm5md" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0" Nov 5 15:55:30.772710 containerd[1597]: 2025-11-05 15:55:30.747 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" Namespace="calico-apiserver" Pod="calico-apiserver-c8547764-tm5md" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0", GenerateName:"calico-apiserver-c8547764-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1d50c3f-0506-4ceb-8aba-ac1f5be110f0", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8547764", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104", Pod:"calico-apiserver-c8547764-tm5md", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.8/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14376bc3c10", MAC:"12:62:30:9a:43:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:30.772710 containerd[1597]: 2025-11-05 15:55:30.768 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" Namespace="calico-apiserver" Pod="calico-apiserver-c8547764-tm5md" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--apiserver--c8547764--tm5md-eth0" Nov 5 15:55:30.804862 containerd[1597]: time="2025-11-05T15:55:30.804699648Z" level=info msg="connecting to shim d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104" address="unix:///run/containerd/s/743a1d7a20c4297b7c7a8ed080dba9a1abf00776529aac7f3f51d151fc927261" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:30.856759 systemd[1]: Started cri-containerd-d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104.scope - libcontainer container d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104. 
Nov 5 15:55:30.870326 kubelet[2757]: E1105 15:55:30.870150 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:30.877629 kubelet[2757]: E1105 15:55:30.877442 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" podUID="ddc61783-6e23-40f0-a07f-5214382089f3" Nov 5 15:55:30.882143 kubelet[2757]: E1105 15:55:30.882106 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:30.893413 kubelet[2757]: I1105 15:55:30.892587 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v4gkt" podStartSLOduration=42.892568306 podStartE2EDuration="42.892568306s" podCreationTimestamp="2025-11-05 15:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:55:30.891724043 +0000 UTC m=+46.698194263" watchObservedRunningTime="2025-11-05 15:55:30.892568306 +0000 UTC m=+46.699038510" Nov 5 15:55:30.975853 containerd[1597]: time="2025-11-05T15:55:30.975798474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:30.976718 containerd[1597]: time="2025-11-05T15:55:30.976675633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:55:30.976990 containerd[1597]: time="2025-11-05T15:55:30.976748100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:30.978095 kubelet[2757]: E1105 15:55:30.977992 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:55:30.978424 kubelet[2757]: E1105 15:55:30.978232 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:55:30.978923 containerd[1597]: time="2025-11-05T15:55:30.978684061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:55:30.978989 kubelet[2757]: E1105 15:55:30.978754 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wt9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rjpbz_calico-system(aa9bd767-dbec-475c-8411-c4b48f98eada): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:30.980941 kubelet[2757]: E1105 15:55:30.980524 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rjpbz" podUID="aa9bd767-dbec-475c-8411-c4b48f98eada" Nov 5 15:55:31.100728 containerd[1597]: time="2025-11-05T15:55:31.100675384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8547764-tm5md,Uid:f1d50c3f-0506-4ceb-8aba-ac1f5be110f0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"d1c2b150d39c1a50003daced7c0213334248f6741b2c7092a7da33e82fbf0104\"" Nov 5 15:55:31.267307 systemd-networkd[1499]: vxlan.calico: Link UP Nov 5 15:55:31.267317 systemd-networkd[1499]: vxlan.calico: Gained carrier Nov 5 15:55:31.268655 systemd-networkd[1499]: cali5c435ce754d: Gained IPv6LL Nov 5 15:55:31.310757 containerd[1597]: time="2025-11-05T15:55:31.310692542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:31.311591 containerd[1597]: time="2025-11-05T15:55:31.311511859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:55:31.312462 containerd[1597]: time="2025-11-05T15:55:31.311528278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:55:31.312847 kubelet[2757]: E1105 15:55:31.312793 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:31.313506 kubelet[2757]: E1105 15:55:31.312861 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:31.313610 containerd[1597]: time="2025-11-05T15:55:31.313281080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:31.314860 
kubelet[2757]: E1105 15:55:31.314621 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c09760b7d2ff444a8ecf03cdbfb0da0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lplbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4fd8787-9gsmz_calico-system(edd0e550-b3db-4c4b-b6a7-951d0aaecf72): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:31.413520 containerd[1597]: 
time="2025-11-05T15:55:31.413349022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b9468c484-bwwkq,Uid:86b40fbc-18e1-4614-aac7-5268cc15773b,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:31.640849 systemd-networkd[1499]: calif93dfe30bf1: Link UP Nov 5 15:55:31.643954 systemd-networkd[1499]: calif93dfe30bf1: Gained carrier Nov 5 15:55:31.663008 containerd[1597]: time="2025-11-05T15:55:31.662586914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:31.668300 containerd[1597]: time="2025-11-05T15:55:31.668107488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:31.669475 containerd[1597]: time="2025-11-05T15:55:31.668630623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:31.670043 kubelet[2757]: E1105 15:55:31.669872 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:31.670299 kubelet[2757]: E1105 15:55:31.670267 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:31.671733 containerd[1597]: time="2025-11-05T15:55:31.671640332Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:55:31.672434 kubelet[2757]: E1105 15:55:31.672353 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ccbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57995d6575-xtv6f_calico-apiserver(d5f77b74-d251-41a0-9423-d917b9539249): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:31.673829 kubelet[2757]: E1105 15:55:31.673757 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" podUID="d5f77b74-d251-41a0-9423-d917b9539249" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.492 [INFO][4671] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0 calico-kube-controllers-7b9468c484- calico-system 86b40fbc-18e1-4614-aac7-5268cc15773b 900 0 2025-11-05 15:55:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b9468c484 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487.0.1-e-b20d930803 calico-kube-controllers-7b9468c484-bwwkq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif93dfe30bf1 [] [] }} ContainerID="a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" Namespace="calico-system" Pod="calico-kube-controllers-7b9468c484-bwwkq" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.493 [INFO][4671] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" Namespace="calico-system" Pod="calico-kube-controllers-7b9468c484-bwwkq" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.569 [INFO][4682] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" HandleID="k8s-pod-network.a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" Workload="ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.570 [INFO][4682] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" HandleID="k8s-pod-network.a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" 
Workload="ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006062a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-e-b20d930803", "pod":"calico-kube-controllers-7b9468c484-bwwkq", "timestamp":"2025-11-05 15:55:31.56977891 +0000 UTC"}, Hostname:"ci-4487.0.1-e-b20d930803", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.570 [INFO][4682] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.570 [INFO][4682] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.570 [INFO][4682] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-e-b20d930803' Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.585 [INFO][4682] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.594 [INFO][4682] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.602 [INFO][4682] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.605 [INFO][4682] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.610 [INFO][4682] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4487.0.1-e-b20d930803" Nov 
5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.610 [INFO][4682] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.612 [INFO][4682] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.619 [INFO][4682] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.626 [INFO][4682] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.9/26] block=192.168.69.0/26 handle="k8s-pod-network.a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.627 [INFO][4682] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.9/26] handle="k8s-pod-network.a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" host="ci-4487.0.1-e-b20d930803" Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.627 [INFO][4682] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:55:31.686399 containerd[1597]: 2025-11-05 15:55:31.627 [INFO][4682] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.9/26] IPv6=[] ContainerID="a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" HandleID="k8s-pod-network.a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" Workload="ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0" Nov 5 15:55:31.687921 containerd[1597]: 2025-11-05 15:55:31.632 [INFO][4671] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" Namespace="calico-system" Pod="calico-kube-controllers-7b9468c484-bwwkq" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0", GenerateName:"calico-kube-controllers-7b9468c484-", Namespace:"calico-system", SelfLink:"", UID:"86b40fbc-18e1-4614-aac7-5268cc15773b", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b9468c484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"", Pod:"calico-kube-controllers-7b9468c484-bwwkq", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif93dfe30bf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:31.687921 containerd[1597]: 2025-11-05 15:55:31.632 [INFO][4671] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.9/32] ContainerID="a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" Namespace="calico-system" Pod="calico-kube-controllers-7b9468c484-bwwkq" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0" Nov 5 15:55:31.687921 containerd[1597]: 2025-11-05 15:55:31.633 [INFO][4671] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif93dfe30bf1 ContainerID="a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" Namespace="calico-system" Pod="calico-kube-controllers-7b9468c484-bwwkq" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0" Nov 5 15:55:31.687921 containerd[1597]: 2025-11-05 15:55:31.643 [INFO][4671] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" Namespace="calico-system" Pod="calico-kube-controllers-7b9468c484-bwwkq" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0" Nov 5 15:55:31.687921 containerd[1597]: 2025-11-05 15:55:31.646 [INFO][4671] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" Namespace="calico-system" Pod="calico-kube-controllers-7b9468c484-bwwkq" WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0", GenerateName:"calico-kube-controllers-7b9468c484-", Namespace:"calico-system", SelfLink:"", UID:"86b40fbc-18e1-4614-aac7-5268cc15773b", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b9468c484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-e-b20d930803", ContainerID:"a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb", Pod:"calico-kube-controllers-7b9468c484-bwwkq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif93dfe30bf1", MAC:"fe:53:e9:cd:8b:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:31.687921 containerd[1597]: 2025-11-05 15:55:31.680 [INFO][4671] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" Namespace="calico-system" Pod="calico-kube-controllers-7b9468c484-bwwkq" 
WorkloadEndpoint="ci--4487.0.1--e--b20d930803-k8s-calico--kube--controllers--7b9468c484--bwwkq-eth0" Nov 5 15:55:31.758190 containerd[1597]: time="2025-11-05T15:55:31.758141819Z" level=info msg="connecting to shim a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb" address="unix:///run/containerd/s/fffbea88212f50c8581493feaac48115754b9d93c5eb9230ca4fc2620d6e1f6b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:31.812686 systemd[1]: Started cri-containerd-a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb.scope - libcontainer container a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb. Nov 5 15:55:31.894126 kubelet[2757]: E1105 15:55:31.894031 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:31.897408 kubelet[2757]: E1105 15:55:31.897125 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" podUID="d5f77b74-d251-41a0-9423-d917b9539249" Nov 5 15:55:31.898052 kubelet[2757]: E1105 15:55:31.898022 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:31.898374 kubelet[2757]: E1105 15:55:31.898228 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" podUID="ddc61783-6e23-40f0-a07f-5214382089f3" Nov 5 15:55:31.899955 kubelet[2757]: E1105 15:55:31.899648 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rjpbz" podUID="aa9bd767-dbec-475c-8411-c4b48f98eada" Nov 5 15:55:31.992007 containerd[1597]: time="2025-11-05T15:55:31.991730958Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:31.992917 containerd[1597]: time="2025-11-05T15:55:31.992733744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:55:31.992917 containerd[1597]: time="2025-11-05T15:55:31.992780948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:55:31.993521 kubelet[2757]: E1105 15:55:31.993481 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:31.993799 kubelet[2757]: E1105 15:55:31.993767 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:31.995596 containerd[1597]: time="2025-11-05T15:55:31.995189155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:31.995841 kubelet[2757]: E1105 15:55:31.995482 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mqg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev
/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjkv_calico-system(ca782dd5-c75b-4c0f-9e74-4db41ed6ac62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:32.099750 systemd-networkd[1499]: cali14376bc3c10: Gained IPv6LL Nov 5 15:55:32.157415 containerd[1597]: time="2025-11-05T15:55:32.157359063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b9468c484-bwwkq,Uid:86b40fbc-18e1-4614-aac7-5268cc15773b,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7d4373d8fed09a80e5ed1300a70b6b91871723bb7d1782768ff3193dd96b3cb\"" Nov 5 15:55:32.445080 containerd[1597]: time="2025-11-05T15:55:32.444949192Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:32.446569 containerd[1597]: time="2025-11-05T15:55:32.446505320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:32.446569 containerd[1597]: time="2025-11-05T15:55:32.446513423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:32.447029 kubelet[2757]: E1105 15:55:32.446951 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:32.447122 kubelet[2757]: E1105 15:55:32.447061 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:32.447745 kubelet[2757]: E1105 15:55:32.447468 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5kb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c8547764-tm5md_calico-apiserver(f1d50c3f-0506-4ceb-8aba-ac1f5be110f0): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:32.449022 containerd[1597]: time="2025-11-05T15:55:32.448561961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:55:32.450311 kubelet[2757]: E1105 15:55:32.450150 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8547764-tm5md" podUID="f1d50c3f-0506-4ceb-8aba-ac1f5be110f0" Nov 5 15:55:32.834491 containerd[1597]: time="2025-11-05T15:55:32.834306229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:32.835585 containerd[1597]: time="2025-11-05T15:55:32.835484358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:55:32.835656 containerd[1597]: time="2025-11-05T15:55:32.835540745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:32.835951 kubelet[2757]: E1105 15:55:32.835891 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:55:32.836034 kubelet[2757]: E1105 15:55:32.835963 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:55:32.837345 kubelet[2757]: E1105 15:55:32.836537 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lplbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capa
bilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4fd8787-9gsmz_calico-system(edd0e550-b3db-4c4b-b6a7-951d0aaecf72): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:32.838525 kubelet[2757]: E1105 15:55:32.838054 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4fd8787-9gsmz" podUID="edd0e550-b3db-4c4b-b6a7-951d0aaecf72" Nov 5 15:55:32.839295 containerd[1597]: time="2025-11-05T15:55:32.838411134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:55:32.897259 kubelet[2757]: E1105 15:55:32.897193 
2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:32.898562 kubelet[2757]: E1105 15:55:32.897682 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:32.899967 kubelet[2757]: E1105 15:55:32.899660 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8547764-tm5md" podUID="f1d50c3f-0506-4ceb-8aba-ac1f5be110f0" Nov 5 15:55:32.900309 kubelet[2757]: E1105 15:55:32.900055 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4fd8787-9gsmz" podUID="edd0e550-b3db-4c4b-b6a7-951d0aaecf72" Nov 5 15:55:33.060014 systemd-networkd[1499]: vxlan.calico: Gained IPv6LL Nov 5 15:55:33.187154 containerd[1597]: time="2025-11-05T15:55:33.186982869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:33.190150 containerd[1597]: time="2025-11-05T15:55:33.189982880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:55:33.190910 containerd[1597]: time="2025-11-05T15:55:33.190047336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:55:33.191094 kubelet[2757]: E1105 15:55:33.190835 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:55:33.191094 kubelet[2757]: E1105 15:55:33.190889 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:55:33.192060 kubelet[2757]: E1105 15:55:33.191257 2757 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mqg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-5fjkv_calico-system(ca782dd5-c75b-4c0f-9e74-4db41ed6ac62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:33.192580 containerd[1597]: time="2025-11-05T15:55:33.192214523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:55:33.193646 kubelet[2757]: E1105 15:55:33.193355 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62" Nov 5 15:55:33.509360 systemd-networkd[1499]: calif93dfe30bf1: Gained IPv6LL Nov 5 15:55:33.720023 containerd[1597]: time="2025-11-05T15:55:33.719916106Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:33.721160 containerd[1597]: time="2025-11-05T15:55:33.721054033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:55:33.721160 containerd[1597]: time="2025-11-05T15:55:33.721115273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:33.721866 kubelet[2757]: E1105 15:55:33.721646 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:33.722341 kubelet[2757]: E1105 15:55:33.721898 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:33.725466 kubelet[2757]: E1105 15:55:33.725360 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grzbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b9468c484-bwwkq_calico-system(86b40fbc-18e1-4614-aac7-5268cc15773b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:33.726658 kubelet[2757]: E1105 15:55:33.726577 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" podUID="86b40fbc-18e1-4614-aac7-5268cc15773b" Nov 5 15:55:33.904601 kubelet[2757]: E1105 15:55:33.904099 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" podUID="86b40fbc-18e1-4614-aac7-5268cc15773b" Nov 5 15:55:33.907034 kubelet[2757]: E1105 15:55:33.906965 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62" Nov 5 15:55:40.827821 systemd[1]: Started sshd@7-134.199.212.97:22-139.178.68.195:41608.service - OpenSSH per-connection server daemon (139.178.68.195:41608). 
Nov 5 15:55:40.964138 sshd[4806]: Accepted publickey for core from 139.178.68.195 port 41608 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:55:40.966360 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:40.972348 systemd-logind[1561]: New session 8 of user core. Nov 5 15:55:40.980732 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 15:55:41.504382 sshd[4809]: Connection closed by 139.178.68.195 port 41608 Nov 5 15:55:41.505280 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:41.512368 systemd[1]: sshd@7-134.199.212.97:22-139.178.68.195:41608.service: Deactivated successfully. Nov 5 15:55:41.517063 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 15:55:41.518196 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Nov 5 15:55:41.519867 systemd-logind[1561]: Removed session 8. Nov 5 15:55:43.414667 containerd[1597]: time="2025-11-05T15:55:43.414624541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:43.792722 containerd[1597]: time="2025-11-05T15:55:43.792659224Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:43.793985 containerd[1597]: time="2025-11-05T15:55:43.793823897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:43.793985 containerd[1597]: time="2025-11-05T15:55:43.793917654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:43.794211 kubelet[2757]: E1105 15:55:43.794153 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:43.794688 kubelet[2757]: E1105 15:55:43.794241 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:43.794688 kubelet[2757]: E1105 15:55:43.794523 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kh6kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57995d6575-xst49_calico-apiserver(ddc61783-6e23-40f0-a07f-5214382089f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:43.796285 kubelet[2757]: E1105 15:55:43.796225 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" podUID="ddc61783-6e23-40f0-a07f-5214382089f3" Nov 5 15:55:44.418481 containerd[1597]: time="2025-11-05T15:55:44.417985958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:55:44.787864 containerd[1597]: 
time="2025-11-05T15:55:44.787798212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:44.788806 containerd[1597]: time="2025-11-05T15:55:44.788730310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:55:44.788976 containerd[1597]: time="2025-11-05T15:55:44.788866400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:55:44.789376 kubelet[2757]: E1105 15:55:44.789307 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:44.789555 kubelet[2757]: E1105 15:55:44.789419 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:44.790218 kubelet[2757]: E1105 15:55:44.789737 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c09760b7d2ff444a8ecf03cdbfb0da0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lplbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4fd8787-9gsmz_calico-system(edd0e550-b3db-4c4b-b6a7-951d0aaecf72): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:44.790602 containerd[1597]: time="2025-11-05T15:55:44.789861019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:45.128351 
containerd[1597]: time="2025-11-05T15:55:45.127842709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:45.129136 containerd[1597]: time="2025-11-05T15:55:45.129077342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:45.129346 containerd[1597]: time="2025-11-05T15:55:45.129314060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:45.129896 kubelet[2757]: E1105 15:55:45.129644 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:45.129896 kubelet[2757]: E1105 15:55:45.129708 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:45.130331 containerd[1597]: time="2025-11-05T15:55:45.130232638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:55:45.131072 kubelet[2757]: E1105 15:55:45.130901 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5kb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c8547764-tm5md_calico-apiserver(f1d50c3f-0506-4ceb-8aba-ac1f5be110f0): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:45.132226 kubelet[2757]: E1105 15:55:45.132173 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8547764-tm5md" podUID="f1d50c3f-0506-4ceb-8aba-ac1f5be110f0" Nov 5 15:55:45.488328 containerd[1597]: time="2025-11-05T15:55:45.488209708Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:45.490108 containerd[1597]: time="2025-11-05T15:55:45.489981806Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:55:45.490301 containerd[1597]: time="2025-11-05T15:55:45.490032152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:45.490620 kubelet[2757]: E1105 15:55:45.490558 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:55:45.490724 kubelet[2757]: E1105 15:55:45.490635 2757 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:55:45.491572 containerd[1597]: time="2025-11-05T15:55:45.491089827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:55:45.492542 kubelet[2757]: E1105 15:55:45.491990 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wt9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,Mou
ntPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rjpbz_calico-system(aa9bd767-dbec-475c-8411-c4b48f98eada): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:45.496287 kubelet[2757]: E1105 15:55:45.496192 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rjpbz" podUID="aa9bd767-dbec-475c-8411-c4b48f98eada" Nov 5 15:55:45.830163 containerd[1597]: time="2025-11-05T15:55:45.829975234Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:45.831291 containerd[1597]: time="2025-11-05T15:55:45.831233488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:55:45.831444 containerd[1597]: time="2025-11-05T15:55:45.831354554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:45.831732 kubelet[2757]: E1105 15:55:45.831676 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:55:45.831824 kubelet[2757]: E1105 15:55:45.831755 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:55:45.831934 kubelet[2757]: E1105 15:55:45.831894 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lplbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4fd8787-9gsmz_calico-system(edd0e550-b3db-4c4b-b6a7-951d0aaecf72): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:45.833935 kubelet[2757]: E1105 15:55:45.833869 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4fd8787-9gsmz" podUID="edd0e550-b3db-4c4b-b6a7-951d0aaecf72" Nov 5 15:55:46.525835 systemd[1]: Started sshd@8-134.199.212.97:22-139.178.68.195:54456.service - OpenSSH per-connection server daemon (139.178.68.195:54456). Nov 5 15:55:46.620124 sshd[4833]: Accepted publickey for core from 139.178.68.195 port 54456 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:55:46.623054 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:46.631012 systemd-logind[1561]: New session 9 of user core. Nov 5 15:55:46.637665 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 15:55:46.790259 sshd[4836]: Connection closed by 139.178.68.195 port 54456 Nov 5 15:55:46.791073 sshd-session[4833]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:46.798376 systemd[1]: sshd@8-134.199.212.97:22-139.178.68.195:54456.service: Deactivated successfully. 
Nov 5 15:55:46.802024 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:55:46.806543 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:55:46.808138 systemd-logind[1561]: Removed session 9. Nov 5 15:55:47.417532 containerd[1597]: time="2025-11-05T15:55:47.415802595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:47.797411 containerd[1597]: time="2025-11-05T15:55:47.797285411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:47.798169 containerd[1597]: time="2025-11-05T15:55:47.798109888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:47.798433 containerd[1597]: time="2025-11-05T15:55:47.798148972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:47.798531 kubelet[2757]: E1105 15:55:47.798436 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:47.798531 kubelet[2757]: E1105 15:55:47.798514 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:47.799048 kubelet[2757]: E1105 15:55:47.798893 
2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ccbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57995d6575-xtv6f_calico-apiserver(d5f77b74-d251-41a0-9423-d917b9539249): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:47.800192 containerd[1597]: time="2025-11-05T15:55:47.799548358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:55:47.800401 kubelet[2757]: E1105 15:55:47.800070 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" podUID="d5f77b74-d251-41a0-9423-d917b9539249" Nov 5 15:55:48.108152 containerd[1597]: 
time="2025-11-05T15:55:48.107963390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:48.109897 containerd[1597]: time="2025-11-05T15:55:48.109792429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:55:48.110177 containerd[1597]: time="2025-11-05T15:55:48.109841136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:48.110511 kubelet[2757]: E1105 15:55:48.110407 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:48.110751 kubelet[2757]: E1105 15:55:48.110611 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:48.111316 containerd[1597]: time="2025-11-05T15:55:48.111240015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:55:48.111622 kubelet[2757]: E1105 15:55:48.111192 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grzbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b9468c484-bwwkq_calico-system(86b40fbc-18e1-4614-aac7-5268cc15773b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:48.113433 kubelet[2757]: E1105 15:55:48.113288 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" podUID="86b40fbc-18e1-4614-aac7-5268cc15773b" Nov 5 15:55:48.474935 containerd[1597]: time="2025-11-05T15:55:48.474869076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:48.476619 containerd[1597]: 
time="2025-11-05T15:55:48.476441788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:55:48.476619 containerd[1597]: time="2025-11-05T15:55:48.476495680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:55:48.476839 kubelet[2757]: E1105 15:55:48.476761 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:48.476839 kubelet[2757]: E1105 15:55:48.476826 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:48.477552 kubelet[2757]: E1105 15:55:48.477339 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mqg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjkv_calico-system(ca782dd5-c75b-4c0f-9e74-4db41ed6ac62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:48.481726 containerd[1597]: time="2025-11-05T15:55:48.481623837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:55:48.897660 containerd[1597]: time="2025-11-05T15:55:48.897245519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:48.898822 containerd[1597]: time="2025-11-05T15:55:48.898718509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:55:48.898822 containerd[1597]: time="2025-11-05T15:55:48.898777294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:55:48.899656 kubelet[2757]: E1105 15:55:48.899101 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:55:48.899656 kubelet[2757]: E1105 15:55:48.899175 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:55:48.899656 kubelet[2757]: E1105 
15:55:48.899374 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mqg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-5fjkv_calico-system(ca782dd5-c75b-4c0f-9e74-4db41ed6ac62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:48.901676 kubelet[2757]: E1105 15:55:48.901590 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62" Nov 5 15:55:51.817041 systemd[1]: Started sshd@9-134.199.212.97:22-139.178.68.195:54468.service - OpenSSH per-connection server daemon (139.178.68.195:54468). Nov 5 15:55:51.896483 sshd[4852]: Accepted publickey for core from 139.178.68.195 port 54468 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:55:51.899236 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:51.907766 systemd-logind[1561]: New session 10 of user core. Nov 5 15:55:51.913832 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 5 15:55:52.161670 sshd[4855]: Connection closed by 139.178.68.195 port 54468 Nov 5 15:55:52.166637 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:52.175127 systemd[1]: sshd@9-134.199.212.97:22-139.178.68.195:54468.service: Deactivated successfully. Nov 5 15:55:52.182354 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:55:52.184725 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:55:52.189219 systemd-logind[1561]: Removed session 10. Nov 5 15:55:56.413436 kubelet[2757]: E1105 15:55:56.413318 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:56.418198 kubelet[2757]: E1105 15:55:56.418127 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" podUID="ddc61783-6e23-40f0-a07f-5214382089f3" Nov 5 15:55:56.978640 containerd[1597]: time="2025-11-05T15:55:56.978253997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f73116df706bbeeb154b115552f57ea4572bc23e1ba547d9f10f31250c2285d\" id:\"cc918f5828737141a296f63277b9cb4b1815af28e26d9dcd56f2b282dd49999c\" pid:4888 exited_at:{seconds:1762358156 nanos:976222538}" Nov 5 15:55:56.985908 kubelet[2757]: E1105 15:55:56.985839 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" Nov 5 15:55:57.182856 systemd[1]: Started sshd@10-134.199.212.97:22-139.178.68.195:57694.service - OpenSSH per-connection server daemon (139.178.68.195:57694). Nov 5 15:55:57.316940 sshd[4902]: Accepted publickey for core from 139.178.68.195 port 57694 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:55:57.319549 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:57.326628 systemd-logind[1561]: New session 11 of user core. Nov 5 15:55:57.332757 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:55:57.416715 kubelet[2757]: E1105 15:55:57.416630 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rjpbz" podUID="aa9bd767-dbec-475c-8411-c4b48f98eada" Nov 5 15:55:57.582062 sshd[4905]: Connection closed by 139.178.68.195 port 57694 Nov 5 15:55:57.583143 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:57.594114 systemd[1]: sshd@10-134.199.212.97:22-139.178.68.195:57694.service: Deactivated successfully. Nov 5 15:55:57.597458 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:55:57.599025 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:55:57.603289 systemd[1]: Started sshd@11-134.199.212.97:22-139.178.68.195:57702.service - OpenSSH per-connection server daemon (139.178.68.195:57702). Nov 5 15:55:57.605356 systemd-logind[1561]: Removed session 11. 
Nov 5 15:55:57.671349 sshd[4918]: Accepted publickey for core from 139.178.68.195 port 57702 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:55:57.673060 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:57.680550 systemd-logind[1561]: New session 12 of user core. Nov 5 15:55:57.686113 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 15:55:57.952695 sshd[4921]: Connection closed by 139.178.68.195 port 57702 Nov 5 15:55:57.954178 sshd-session[4918]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:57.969284 systemd[1]: sshd@11-134.199.212.97:22-139.178.68.195:57702.service: Deactivated successfully. Nov 5 15:55:57.976519 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:55:57.978957 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:55:57.990512 systemd[1]: Started sshd@12-134.199.212.97:22-139.178.68.195:57706.service - OpenSSH per-connection server daemon (139.178.68.195:57706). Nov 5 15:55:57.992477 systemd-logind[1561]: Removed session 12. Nov 5 15:55:58.064455 sshd[4931]: Accepted publickey for core from 139.178.68.195 port 57706 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:55:58.066770 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:58.077687 systemd-logind[1561]: New session 13 of user core. Nov 5 15:55:58.083646 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 15:55:58.268575 sshd[4934]: Connection closed by 139.178.68.195 port 57706 Nov 5 15:55:58.269839 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:58.277281 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:55:58.278636 systemd[1]: sshd@12-134.199.212.97:22-139.178.68.195:57706.service: Deactivated successfully. 
Nov 5 15:55:58.283103 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:55:58.287477 systemd-logind[1561]: Removed session 13. Nov 5 15:55:58.415291 kubelet[2757]: E1105 15:55:58.415228 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" podUID="86b40fbc-18e1-4614-aac7-5268cc15773b" Nov 5 15:55:59.413675 kubelet[2757]: E1105 15:55:59.413615 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:55:59.423439 kubelet[2757]: E1105 15:55:59.420844 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62" Nov 5 15:55:59.425274 kubelet[2757]: E1105 15:55:59.425196 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4fd8787-9gsmz" podUID="edd0e550-b3db-4c4b-b6a7-951d0aaecf72" Nov 5 15:56:00.416436 kubelet[2757]: E1105 15:56:00.416155 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8547764-tm5md" podUID="f1d50c3f-0506-4ceb-8aba-ac1f5be110f0" Nov 5 15:56:01.415432 kubelet[2757]: E1105 15:56:01.415349 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" podUID="d5f77b74-d251-41a0-9423-d917b9539249" Nov 5 15:56:03.287316 systemd[1]: Started sshd@13-134.199.212.97:22-139.178.68.195:42758.service - OpenSSH per-connection server daemon (139.178.68.195:42758). Nov 5 15:56:03.378747 sshd[4954]: Accepted publickey for core from 139.178.68.195 port 42758 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:56:03.381308 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:56:03.388741 systemd-logind[1561]: New session 14 of user core. Nov 5 15:56:03.397745 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:56:03.601876 sshd[4957]: Connection closed by 139.178.68.195 port 42758 Nov 5 15:56:03.603513 sshd-session[4954]: pam_unix(sshd:session): session closed for user core Nov 5 15:56:03.613031 systemd[1]: sshd@13-134.199.212.97:22-139.178.68.195:42758.service: Deactivated successfully. Nov 5 15:56:03.618701 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:56:03.622201 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:56:03.626838 systemd-logind[1561]: Removed session 14. 
Nov 5 15:56:06.415419 kubelet[2757]: E1105 15:56:06.414668 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:56:07.418018 containerd[1597]: time="2025-11-05T15:56:07.417942332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:56:07.820262 containerd[1597]: time="2025-11-05T15:56:07.820117792Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:56:07.820913 containerd[1597]: time="2025-11-05T15:56:07.820866486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:56:07.821026 containerd[1597]: time="2025-11-05T15:56:07.820965323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:56:07.821406 kubelet[2757]: E1105 15:56:07.821221 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:56:07.821406 kubelet[2757]: E1105 15:56:07.821368 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:56:07.821921 kubelet[2757]: E1105 
15:56:07.821601 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kh6kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57995d6575-xst49_calico-apiserver(ddc61783-6e23-40f0-a07f-5214382089f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:56:07.822996 kubelet[2757]: E1105 15:56:07.822823 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" podUID="ddc61783-6e23-40f0-a07f-5214382089f3" Nov 5 15:56:08.619040 systemd[1]: Started sshd@14-134.199.212.97:22-139.178.68.195:42766.service - OpenSSH per-connection server daemon (139.178.68.195:42766). 
Nov 5 15:56:08.728872 sshd[4969]: Accepted publickey for core from 139.178.68.195 port 42766 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:56:08.731979 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:56:08.740361 systemd-logind[1561]: New session 15 of user core. Nov 5 15:56:08.749784 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:56:09.003990 sshd[4972]: Connection closed by 139.178.68.195 port 42766 Nov 5 15:56:09.006443 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Nov 5 15:56:09.014600 systemd[1]: sshd@14-134.199.212.97:22-139.178.68.195:42766.service: Deactivated successfully. Nov 5 15:56:09.018856 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:56:09.021036 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:56:09.025801 systemd-logind[1561]: Removed session 15. Nov 5 15:56:10.416382 containerd[1597]: time="2025-11-05T15:56:10.416319774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:56:10.778785 containerd[1597]: time="2025-11-05T15:56:10.778699508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:56:10.780049 containerd[1597]: time="2025-11-05T15:56:10.779913362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:56:10.780049 containerd[1597]: time="2025-11-05T15:56:10.779962186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:56:10.780570 kubelet[2757]: E1105 15:56:10.780474 2757 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:56:10.780570 kubelet[2757]: E1105 15:56:10.780558 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:56:10.781379 kubelet[2757]: E1105 15:56:10.780976 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMoun
t{Name:kube-api-access-grzbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b9468c484-bwwkq_calico-system(86b40fbc-18e1-4614-aac7-5268cc15773b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:56:10.781756 containerd[1597]: time="2025-11-05T15:56:10.781703911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:56:10.782291 kubelet[2757]: E1105 15:56:10.782187 2757 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" podUID="86b40fbc-18e1-4614-aac7-5268cc15773b" Nov 5 15:56:11.312666 containerd[1597]: time="2025-11-05T15:56:11.312597057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:56:11.313824 containerd[1597]: time="2025-11-05T15:56:11.313740232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:56:11.314267 containerd[1597]: time="2025-11-05T15:56:11.313887557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:56:11.314336 kubelet[2757]: E1105 15:56:11.314148 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:56:11.314336 kubelet[2757]: E1105 15:56:11.314209 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:56:11.314832 kubelet[2757]: 
E1105 15:56:11.314734 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mqg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjkv_calico-system(ca782dd5-c75b-4c0f-9e74-4db41ed6ac62): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:56:11.318952 containerd[1597]: time="2025-11-05T15:56:11.318875697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:56:11.629794 containerd[1597]: time="2025-11-05T15:56:11.629566148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:56:11.630871 containerd[1597]: time="2025-11-05T15:56:11.630783368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:56:11.631111 containerd[1597]: time="2025-11-05T15:56:11.630847279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:56:11.631167 kubelet[2757]: E1105 15:56:11.631094 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:56:11.631167 kubelet[2757]: E1105 15:56:11.631156 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:56:11.631683 kubelet[2757]: E1105 15:56:11.631558 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mqg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fjkv_calico-system(ca782dd5-c75b-4c0f-9e74-4db41ed6ac62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:56:11.632028 containerd[1597]: time="2025-11-05T15:56:11.631740326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:56:11.633159 kubelet[2757]: E1105 15:56:11.632875 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62" Nov 5 15:56:12.031133 containerd[1597]: time="2025-11-05T15:56:12.031062462Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:56:12.032700 containerd[1597]: time="2025-11-05T15:56:12.032419219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:56:12.032700 containerd[1597]: time="2025-11-05T15:56:12.032536894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:56:12.032959 kubelet[2757]: E1105 15:56:12.032794 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:56:12.032959 kubelet[2757]: E1105 15:56:12.032850 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:56:12.034960 kubelet[2757]: E1105 15:56:12.032997 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wt9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rjpbz_calico-system(aa9bd767-dbec-475c-8411-c4b48f98eada): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:56:12.034960 kubelet[2757]: E1105 15:56:12.034441 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rjpbz" podUID="aa9bd767-dbec-475c-8411-c4b48f98eada" Nov 5 15:56:13.415594 containerd[1597]: time="2025-11-05T15:56:13.415436635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:56:13.770444 containerd[1597]: time="2025-11-05T15:56:13.770253978Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Nov 5 15:56:13.771585 containerd[1597]: time="2025-11-05T15:56:13.771443801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:56:13.771585 containerd[1597]: time="2025-11-05T15:56:13.771516514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:56:13.772425 kubelet[2757]: E1105 15:56:13.771759 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:56:13.772425 kubelet[2757]: E1105 15:56:13.771829 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:56:13.774993 kubelet[2757]: E1105 15:56:13.774591 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5kb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c8547764-tm5md_calico-apiserver(f1d50c3f-0506-4ceb-8aba-ac1f5be110f0): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:56:13.775167 containerd[1597]: time="2025-11-05T15:56:13.773505234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:56:13.776092 kubelet[2757]: E1105 15:56:13.775936 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8547764-tm5md" podUID="f1d50c3f-0506-4ceb-8aba-ac1f5be110f0" Nov 5 15:56:14.021740 systemd[1]: Started sshd@15-134.199.212.97:22-139.178.68.195:36624.service - OpenSSH per-connection server daemon (139.178.68.195:36624). Nov 5 15:56:14.172027 sshd[4991]: Accepted publickey for core from 139.178.68.195 port 36624 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:56:14.174234 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:56:14.182734 systemd-logind[1561]: New session 16 of user core. Nov 5 15:56:14.194752 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 5 15:56:14.256514 containerd[1597]: time="2025-11-05T15:56:14.256416390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:56:14.258054 containerd[1597]: time="2025-11-05T15:56:14.257530204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:56:14.258054 containerd[1597]: time="2025-11-05T15:56:14.257669624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:56:14.258294 kubelet[2757]: E1105 15:56:14.257953 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:56:14.258294 kubelet[2757]: E1105 15:56:14.258031 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:56:14.258294 kubelet[2757]: E1105 15:56:14.258184 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c09760b7d2ff444a8ecf03cdbfb0da0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lplbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4fd8787-9gsmz_calico-system(edd0e550-b3db-4c4b-b6a7-951d0aaecf72): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:56:14.262311 containerd[1597]: time="2025-11-05T15:56:14.262259591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:56:14.390555 
sshd[4994]: Connection closed by 139.178.68.195 port 36624 Nov 5 15:56:14.392526 sshd-session[4991]: pam_unix(sshd:session): session closed for user core Nov 5 15:56:14.402811 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:56:14.404008 systemd[1]: sshd@15-134.199.212.97:22-139.178.68.195:36624.service: Deactivated successfully. Nov 5 15:56:14.407901 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:56:14.413547 systemd-logind[1561]: Removed session 16. Nov 5 15:56:14.633019 containerd[1597]: time="2025-11-05T15:56:14.632953914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:56:14.634079 containerd[1597]: time="2025-11-05T15:56:14.634021973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:56:14.634277 containerd[1597]: time="2025-11-05T15:56:14.634203229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:56:14.634736 kubelet[2757]: E1105 15:56:14.634627 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:56:14.634736 kubelet[2757]: E1105 15:56:14.634707 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:56:14.635614 kubelet[2757]: E1105 15:56:14.635480 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lplbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4fd8787-9gsmz_calico-system(edd0e550-b3db-4c4b-b6a7-951d0aaecf72): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:56:14.636917 kubelet[2757]: E1105 15:56:14.636842 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4fd8787-9gsmz" podUID="edd0e550-b3db-4c4b-b6a7-951d0aaecf72" Nov 5 15:56:15.415594 containerd[1597]: time="2025-11-05T15:56:15.415293724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:56:15.780644 containerd[1597]: time="2025-11-05T15:56:15.780528658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:56:15.781435 containerd[1597]: time="2025-11-05T15:56:15.781290338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:56:15.781435 containerd[1597]: time="2025-11-05T15:56:15.781342921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:56:15.781635 kubelet[2757]: E1105 15:56:15.781587 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:56:15.781968 kubelet[2757]: E1105 15:56:15.781644 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:56:15.781968 kubelet[2757]: E1105 15:56:15.781771 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ccbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57995d6575-xtv6f_calico-apiserver(d5f77b74-d251-41a0-9423-d917b9539249): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:56:15.783413 kubelet[2757]: E1105 15:56:15.783354 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" podUID="d5f77b74-d251-41a0-9423-d917b9539249" Nov 5 15:56:19.410956 systemd[1]: Started sshd@16-134.199.212.97:22-139.178.68.195:36630.service - OpenSSH per-connection server daemon (139.178.68.195:36630). Nov 5 15:56:19.413480 kubelet[2757]: E1105 15:56:19.413403 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:56:19.416998 kubelet[2757]: E1105 15:56:19.416937 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" podUID="ddc61783-6e23-40f0-a07f-5214382089f3" Nov 5 15:56:19.506059 sshd[5006]: Accepted publickey for core from 139.178.68.195 port 36630 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:56:19.507834 
sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:56:19.516127 systemd-logind[1561]: New session 17 of user core. Nov 5 15:56:19.521811 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:56:19.758381 sshd[5009]: Connection closed by 139.178.68.195 port 36630 Nov 5 15:56:19.759616 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Nov 5 15:56:19.774702 systemd[1]: sshd@16-134.199.212.97:22-139.178.68.195:36630.service: Deactivated successfully. Nov 5 15:56:19.778098 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:56:19.779590 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:56:19.786897 systemd[1]: Started sshd@17-134.199.212.97:22-139.178.68.195:36636.service - OpenSSH per-connection server daemon (139.178.68.195:36636). Nov 5 15:56:19.788779 systemd-logind[1561]: Removed session 17. Nov 5 15:56:19.910450 sshd[5021]: Accepted publickey for core from 139.178.68.195 port 36636 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:56:19.911844 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:56:19.919708 systemd-logind[1561]: New session 18 of user core. Nov 5 15:56:19.926688 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:56:20.314556 sshd[5026]: Connection closed by 139.178.68.195 port 36636 Nov 5 15:56:20.316373 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Nov 5 15:56:20.333368 systemd[1]: Started sshd@18-134.199.212.97:22-139.178.68.195:36652.service - OpenSSH per-connection server daemon (139.178.68.195:36652). Nov 5 15:56:20.335315 systemd[1]: sshd@17-134.199.212.97:22-139.178.68.195:36636.service: Deactivated successfully. Nov 5 15:56:20.344803 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:56:20.347198 systemd-logind[1561]: Session 18 logged out. 
Waiting for processes to exit. Nov 5 15:56:20.351028 systemd-logind[1561]: Removed session 18. Nov 5 15:56:20.469836 sshd[5033]: Accepted publickey for core from 139.178.68.195 port 36652 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:56:20.472284 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:56:20.480482 systemd-logind[1561]: New session 19 of user core. Nov 5 15:56:20.488766 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:56:21.234098 sshd[5039]: Connection closed by 139.178.68.195 port 36652 Nov 5 15:56:21.238240 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Nov 5 15:56:21.253221 systemd[1]: sshd@18-134.199.212.97:22-139.178.68.195:36652.service: Deactivated successfully. Nov 5 15:56:21.259640 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:56:21.265992 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:56:21.273898 systemd[1]: Started sshd@19-134.199.212.97:22-139.178.68.195:36666.service - OpenSSH per-connection server daemon (139.178.68.195:36666). Nov 5 15:56:21.276980 systemd-logind[1561]: Removed session 19. Nov 5 15:56:21.496962 sshd[5054]: Accepted publickey for core from 139.178.68.195 port 36666 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:56:21.501751 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:56:21.517493 systemd-logind[1561]: New session 20 of user core. Nov 5 15:56:21.525717 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 15:56:22.029804 sshd[5059]: Connection closed by 139.178.68.195 port 36666 Nov 5 15:56:22.030979 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Nov 5 15:56:22.050955 systemd[1]: sshd@19-134.199.212.97:22-139.178.68.195:36666.service: Deactivated successfully. 
Nov 5 15:56:22.055136 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:56:22.060156 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:56:22.068473 systemd-logind[1561]: Removed session 20. Nov 5 15:56:22.071953 systemd[1]: Started sshd@20-134.199.212.97:22-139.178.68.195:36680.service - OpenSSH per-connection server daemon (139.178.68.195:36680). Nov 5 15:56:22.251502 sshd[5069]: Accepted publickey for core from 139.178.68.195 port 36680 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:56:22.253592 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:56:22.262409 systemd-logind[1561]: New session 21 of user core. Nov 5 15:56:22.271798 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 15:56:22.419531 kubelet[2757]: E1105 15:56:22.416786 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" podUID="86b40fbc-18e1-4614-aac7-5268cc15773b" Nov 5 15:56:22.452847 sshd[5072]: Connection closed by 139.178.68.195 port 36680 Nov 5 15:56:22.453809 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Nov 5 15:56:22.461818 systemd[1]: sshd@20-134.199.212.97:22-139.178.68.195:36680.service: Deactivated successfully. Nov 5 15:56:22.465802 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:56:22.467875 systemd-logind[1561]: Session 21 logged out. Waiting for processes to exit. 
Nov 5 15:56:22.471174 systemd-logind[1561]: Removed session 21.
Nov 5 15:56:24.419048 kubelet[2757]: E1105 15:56:24.418459 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62"
Nov 5 15:56:25.416514 kubelet[2757]: E1105 15:56:25.416464 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4fd8787-9gsmz" podUID="edd0e550-b3db-4c4b-b6a7-951d0aaecf72"
Nov 5 15:56:26.414127 kubelet[2757]: E1105 15:56:26.414074 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rjpbz" podUID="aa9bd767-dbec-475c-8411-c4b48f98eada"
Nov 5 15:56:26.914292 containerd[1597]: time="2025-11-05T15:56:26.914247972Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f73116df706bbeeb154b115552f57ea4572bc23e1ba547d9f10f31250c2285d\" id:\"b90c326ae8bb4d415c6e1b1d07214676d56a8c99ab7c2f4c87c4f5839231f140\" pid:5096 exited_at:{seconds:1762358186 nanos:913728679}"
Nov 5 15:56:27.470144 systemd[1]: Started sshd@21-134.199.212.97:22-139.178.68.195:44858.service - OpenSSH per-connection server daemon (139.178.68.195:44858).
Nov 5 15:56:27.564798 sshd[5108]: Accepted publickey for core from 139.178.68.195 port 44858 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:56:27.567160 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:27.573053 systemd-logind[1561]: New session 22 of user core.
Nov 5 15:56:27.583668 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 5 15:56:27.800530 sshd[5111]: Connection closed by 139.178.68.195 port 44858
Nov 5 15:56:27.799194 sshd-session[5108]: pam_unix(sshd:session): session closed for user core
Nov 5 15:56:27.806793 systemd[1]: sshd@21-134.199.212.97:22-139.178.68.195:44858.service: Deactivated successfully.
Nov 5 15:56:27.811867 systemd[1]: session-22.scope: Deactivated successfully.
Nov 5 15:56:27.815695 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit.
Nov 5 15:56:27.819265 systemd-logind[1561]: Removed session 22.
Nov 5 15:56:28.416061 kubelet[2757]: E1105 15:56:28.414944 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8547764-tm5md" podUID="f1d50c3f-0506-4ceb-8aba-ac1f5be110f0"
Nov 5 15:56:30.415657 kubelet[2757]: E1105 15:56:30.415076 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:56:30.416196 kubelet[2757]: E1105 15:56:30.415971 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" podUID="d5f77b74-d251-41a0-9423-d917b9539249"
Nov 5 15:56:32.817884 systemd[1]: Started sshd@22-134.199.212.97:22-139.178.68.195:44874.service - OpenSSH per-connection server daemon (139.178.68.195:44874).
Nov 5 15:56:32.891751 sshd[5130]: Accepted publickey for core from 139.178.68.195 port 44874 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:56:32.894667 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:32.903110 systemd-logind[1561]: New session 23 of user core.
Nov 5 15:56:32.908769 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 5 15:56:33.052104 sshd[5133]: Connection closed by 139.178.68.195 port 44874
Nov 5 15:56:33.052811 sshd-session[5130]: pam_unix(sshd:session): session closed for user core
Nov 5 15:56:33.058509 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit.
Nov 5 15:56:33.059179 systemd[1]: sshd@22-134.199.212.97:22-139.178.68.195:44874.service: Deactivated successfully.
Nov 5 15:56:33.062757 systemd[1]: session-23.scope: Deactivated successfully.
Nov 5 15:56:33.065732 systemd-logind[1561]: Removed session 23.
Nov 5 15:56:33.425960 kubelet[2757]: E1105 15:56:33.425271 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" podUID="ddc61783-6e23-40f0-a07f-5214382089f3"
Nov 5 15:56:35.419503 kubelet[2757]: E1105 15:56:35.419173 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b9468c484-bwwkq" podUID="86b40fbc-18e1-4614-aac7-5268cc15773b"
Nov 5 15:56:37.416834 kubelet[2757]: E1105 15:56:37.416775 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fjkv" podUID="ca782dd5-c75b-4c0f-9e74-4db41ed6ac62"
Nov 5 15:56:37.417936 kubelet[2757]: E1105 15:56:37.416919 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4fd8787-9gsmz" podUID="edd0e550-b3db-4c4b-b6a7-951d0aaecf72"
Nov 5 15:56:38.072090 systemd[1]: Started sshd@23-134.199.212.97:22-139.178.68.195:49442.service - OpenSSH per-connection server daemon (139.178.68.195:49442).
Nov 5 15:56:38.228359 sshd[5144]: Accepted publickey for core from 139.178.68.195 port 49442 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:56:38.231831 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:38.241064 systemd-logind[1561]: New session 24 of user core.
Nov 5 15:56:38.246712 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 5 15:56:38.633775 sshd[5147]: Connection closed by 139.178.68.195 port 49442
Nov 5 15:56:38.633652 sshd-session[5144]: pam_unix(sshd:session): session closed for user core
Nov 5 15:56:38.641960 systemd[1]: sshd@23-134.199.212.97:22-139.178.68.195:49442.service: Deactivated successfully.
Nov 5 15:56:38.648627 systemd[1]: session-24.scope: Deactivated successfully.
Nov 5 15:56:38.651216 systemd-logind[1561]: Session 24 logged out. Waiting for processes to exit.
Nov 5 15:56:38.658849 systemd-logind[1561]: Removed session 24.
Nov 5 15:56:39.414078 kubelet[2757]: E1105 15:56:39.413948 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rjpbz" podUID="aa9bd767-dbec-475c-8411-c4b48f98eada"
Nov 5 15:56:40.419550 kubelet[2757]: E1105 15:56:40.419465 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8547764-tm5md" podUID="f1d50c3f-0506-4ceb-8aba-ac1f5be110f0"
Nov 5 15:56:42.415939 kubelet[2757]: E1105 15:56:42.415078 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xtv6f" podUID="d5f77b74-d251-41a0-9423-d917b9539249"
Nov 5 15:56:43.650735 systemd[1]: Started sshd@24-134.199.212.97:22-139.178.68.195:41526.service - OpenSSH per-connection server daemon (139.178.68.195:41526).
Nov 5 15:56:43.722253 sshd[5159]: Accepted publickey for core from 139.178.68.195 port 41526 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:56:43.724728 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:43.733346 systemd-logind[1561]: New session 25 of user core.
Nov 5 15:56:43.738608 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 5 15:56:43.935478 sshd[5162]: Connection closed by 139.178.68.195 port 41526
Nov 5 15:56:43.935216 sshd-session[5159]: pam_unix(sshd:session): session closed for user core
Nov 5 15:56:43.946552 systemd[1]: sshd@24-134.199.212.97:22-139.178.68.195:41526.service: Deactivated successfully.
Nov 5 15:56:43.946822 systemd-logind[1561]: Session 25 logged out. Waiting for processes to exit.
Nov 5 15:56:43.952685 systemd[1]: session-25.scope: Deactivated successfully.
Nov 5 15:56:43.958373 systemd-logind[1561]: Removed session 25.
Nov 5 15:56:45.414354 kubelet[2757]: E1105 15:56:45.414301 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57995d6575-xst49" podUID="ddc61783-6e23-40f0-a07f-5214382089f3"